00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1705 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 2971 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.011 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.012 The recommended git tool is: git 00:00:00.012 using credential 00000000-0000-0000-0000-000000000002 00:00:00.014 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.028 Fetching changes from the remote Git repository 00:00:00.029 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.040 Using shallow fetch with depth 1 00:00:00.040 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.040 > git --version # timeout=10 00:00:00.051 > git --version # 'git version 2.39.2' 00:00:00.051 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.052 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.052 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.087 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.100 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.112 Checking out Revision d55dd09e9e6d4661df5d1073790609767cbcb60c (FETCH_HEAD) 00:00:02.112 > git config core.sparsecheckout # timeout=10 00:00:02.124 > git read-tree -mu HEAD # timeout=10 00:00:02.143 > git checkout -f d55dd09e9e6d4661df5d1073790609767cbcb60c # timeout=5 00:00:02.162 Commit message: "ansible/roles/custom_facts: Add subsystem info to VMDs' nvmes" 00:00:02.162 > git rev-list --no-walk d55dd09e9e6d4661df5d1073790609767cbcb60c # timeout=10 00:00:02.367 [Pipeline] Start of Pipeline 00:00:02.382 [Pipeline] library 00:00:02.383 Loading library shm_lib@master 00:00:02.384 Library shm_lib@master is cached. Copying from home. 00:00:02.401 [Pipeline] node 00:00:17.403 Still waiting to schedule task 00:00:17.403 Waiting for next available executor on ‘vagrant-vm-host’ 00:04:10.919 Running on VM-host-SM4 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:04:10.921 [Pipeline] { 00:04:10.934 [Pipeline] catchError 00:04:10.936 [Pipeline] { 00:04:10.949 [Pipeline] wrap 00:04:10.957 [Pipeline] { 00:04:10.966 [Pipeline] stage 00:04:10.968 [Pipeline] { (Prologue) 00:04:10.989 [Pipeline] echo 00:04:10.990 Node: VM-host-SM4 00:04:10.996 [Pipeline] cleanWs 00:04:11.019 [WS-CLEANUP] Deleting project workspace... 00:04:11.019 [WS-CLEANUP] Deferred wipeout is used... 
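The checkout at the top of this log pins the jbp helper repository to a single commit via a depth-1 fetch. Below is a minimal bash sketch of that pinned shallow checkout; only the repository URL and the revision are taken from the log, while the working-directory name and the error-handling flags are illustrative additions.

#!/usr/bin/env bash
# Illustrative sketch of the pinned, shallow checkout performed above.
# Only the repository URL and the revision are copied from the log.
set -euo pipefail

repo_url=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
revision=d55dd09e9e6d4661df5d1073790609767cbcb60c

git init jbp
cd jbp
git config remote.origin.url "$repo_url"
# --depth=1 keeps the workspace small; the fetched tip is then checked out
# detached, matching the "Checking out Revision ... (FETCH_HEAD)" line above.
git fetch --tags --force --progress --depth=1 -- "$repo_url" refs/heads/master
git rev-parse 'FETCH_HEAD^{commit}'
git checkout -f "$revision"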
00:04:11.024 [WS-CLEANUP] done 00:04:11.197 [Pipeline] setCustomBuildProperty 00:04:11.285 [Pipeline] nodesByLabel 00:04:11.287 Found a total of 1 nodes with the 'sorcerer' label 00:04:11.296 [Pipeline] httpRequest 00:04:11.300 HttpMethod: GET 00:04:11.300 URL: http://10.211.164.101/packages/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz 00:04:11.302 Sending request to url: http://10.211.164.101/packages/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz 00:04:11.302 Response Code: HTTP/1.1 200 OK 00:04:11.303 Success: Status code 200 is in the accepted range: 200,404 00:04:11.303 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz 00:04:11.440 [Pipeline] sh 00:04:11.719 + tar --no-same-owner -xf jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz 00:04:11.735 [Pipeline] httpRequest 00:04:11.739 HttpMethod: GET 00:04:11.739 URL: http://10.211.164.101/packages/spdk_26d44a121d9e45b13d090cd95fff369d55d0fe0d.tar.gz 00:04:11.740 Sending request to url: http://10.211.164.101/packages/spdk_26d44a121d9e45b13d090cd95fff369d55d0fe0d.tar.gz 00:04:11.741 Response Code: HTTP/1.1 200 OK 00:04:11.741 Success: Status code 200 is in the accepted range: 200,404 00:04:11.741 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_26d44a121d9e45b13d090cd95fff369d55d0fe0d.tar.gz 00:04:13.903 [Pipeline] sh 00:04:14.184 + tar --no-same-owner -xf spdk_26d44a121d9e45b13d090cd95fff369d55d0fe0d.tar.gz 00:04:17.479 [Pipeline] sh 00:04:17.761 + git -C spdk log --oneline -n5 00:04:17.761 26d44a121 trace: rename owner to owner_type 00:04:17.761 00918d5c0 trace: change trace_flags_init() to return int 00:04:17.761 dc38e848f trace: make spdk_trace_flags_init() a private function 00:04:17.761 679c3183e lvol: set default timeout to 90.0 in bdev_lvol_create_lvstore 00:04:17.761 93731ac74 rpc: unset default timeout value in arg parse 00:04:17.782 [Pipeline] withCredentials 00:04:17.793 > git --version # timeout=10 00:04:17.806 > git --version # 'git version 2.39.2' 00:04:17.820 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:04:17.823 [Pipeline] { 00:04:17.832 [Pipeline] retry 00:04:17.834 [Pipeline] { 00:04:17.852 [Pipeline] sh 00:04:18.130 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:04:18.445 [Pipeline] } 00:04:18.467 [Pipeline] // retry 00:04:18.473 [Pipeline] } 00:04:18.494 [Pipeline] // withCredentials 00:04:18.507 [Pipeline] httpRequest 00:04:18.511 HttpMethod: GET 00:04:18.512 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:18.513 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:18.515 Response Code: HTTP/1.1 200 OK 00:04:18.516 Success: Status code 200 is in the accepted range: 200,404 00:04:18.516 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:19.739 [Pipeline] sh 00:04:20.018 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:21.941 [Pipeline] sh 00:04:22.223 + git -C dpdk log --oneline -n5 00:04:22.223 caf0f5d395 version: 22.11.4 00:04:22.223 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:04:22.223 dc9c799c7d vhost: fix missing spinlock unlock 00:04:22.223 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:04:22.223 6ef77f2a5e net/gve: fix RX buffer size alignment 00:04:22.241 [Pipeline] writeFile 00:04:22.258 [Pipeline] 
sh 00:04:22.533 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:04:22.543 [Pipeline] sh 00:04:22.822 + cat autorun-spdk.conf 00:04:22.822 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:22.822 SPDK_TEST_NVMF=1 00:04:22.822 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:22.822 SPDK_TEST_URING=1 00:04:22.822 SPDK_TEST_USDT=1 00:04:22.822 SPDK_RUN_UBSAN=1 00:04:22.822 NET_TYPE=virt 00:04:22.822 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:04:22.822 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:04:22.822 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:22.829 RUN_NIGHTLY=1 00:04:22.832 [Pipeline] } 00:04:22.849 [Pipeline] // stage 00:04:22.865 [Pipeline] stage 00:04:22.867 [Pipeline] { (Run VM) 00:04:22.882 [Pipeline] sh 00:04:23.161 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:04:23.161 + echo 'Start stage prepare_nvme.sh' 00:04:23.161 Start stage prepare_nvme.sh 00:04:23.161 + [[ -n 7 ]] 00:04:23.161 + disk_prefix=ex7 00:04:23.161 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:04:23.161 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:04:23.161 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:04:23.161 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:23.161 ++ SPDK_TEST_NVMF=1 00:04:23.161 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:23.161 ++ SPDK_TEST_URING=1 00:04:23.161 ++ SPDK_TEST_USDT=1 00:04:23.161 ++ SPDK_RUN_UBSAN=1 00:04:23.161 ++ NET_TYPE=virt 00:04:23.161 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:04:23.161 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:04:23.161 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:23.161 ++ RUN_NIGHTLY=1 00:04:23.161 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:04:23.161 + nvme_files=() 00:04:23.161 + declare -A nvme_files 00:04:23.161 + backend_dir=/var/lib/libvirt/images/backends 00:04:23.161 + nvme_files['nvme.img']=5G 00:04:23.161 + nvme_files['nvme-cmb.img']=5G 00:04:23.161 + nvme_files['nvme-multi0.img']=4G 00:04:23.161 + nvme_files['nvme-multi1.img']=4G 00:04:23.161 + nvme_files['nvme-multi2.img']=4G 00:04:23.161 + nvme_files['nvme-openstack.img']=8G 00:04:23.161 + nvme_files['nvme-zns.img']=5G 00:04:23.161 + (( SPDK_TEST_NVME_PMR == 1 )) 00:04:23.161 + (( SPDK_TEST_FTL == 1 )) 00:04:23.161 + (( SPDK_TEST_NVME_FDP == 1 )) 00:04:23.161 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:04:23.162 + for nvme in "${!nvme_files[@]}" 00:04:23.162 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:04:23.162 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:04:23.162 + for nvme in "${!nvme_files[@]}" 00:04:23.162 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:04:23.162 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:04:23.162 + for nvme in "${!nvme_files[@]}" 00:04:23.162 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:04:23.162 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:04:23.162 + for nvme in "${!nvme_files[@]}" 00:04:23.162 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:04:23.162 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:04:23.162 + for nvme in "${!nvme_files[@]}" 00:04:23.162 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:04:23.162 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:04:23.162 + for nvme in "${!nvme_files[@]}" 00:04:23.162 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:04:23.420 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:04:23.420 + for nvme in "${!nvme_files[@]}" 00:04:23.420 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:04:23.420 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:04:23.420 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:04:23.420 + echo 'End stage prepare_nvme.sh' 00:04:23.420 End stage prepare_nvme.sh 00:04:23.433 [Pipeline] sh 00:04:23.714 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:04:23.714 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:04:23.714 00:04:23.714 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:04:23.714 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:04:23.714 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:04:23.714 HELP=0 00:04:23.714 DRY_RUN=0 00:04:23.714 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:04:23.714 NVME_DISKS_TYPE=nvme,nvme, 00:04:23.714 NVME_AUTO_CREATE=0 00:04:23.714 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:04:23.714 NVME_CMB=,, 00:04:23.714 NVME_PMR=,, 00:04:23.714 NVME_ZNS=,, 00:04:23.714 NVME_MS=,, 00:04:23.714 NVME_FDP=,, 
00:04:23.714 SPDK_VAGRANT_DISTRO=fedora38 00:04:23.714 SPDK_VAGRANT_VMCPU=10 00:04:23.714 SPDK_VAGRANT_VMRAM=12288 00:04:23.714 SPDK_VAGRANT_PROVIDER=libvirt 00:04:23.714 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:04:23.714 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:04:23.714 SPDK_OPENSTACK_NETWORK=0 00:04:23.714 VAGRANT_PACKAGE_BOX=0 00:04:23.714 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:04:23.714 FORCE_DISTRO=true 00:04:23.714 VAGRANT_BOX_VERSION= 00:04:23.714 EXTRA_VAGRANTFILES= 00:04:23.714 NIC_MODEL=e1000 00:04:23.714 00:04:23.714 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:04:23.714 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:04:27.053 Bringing machine 'default' up with 'libvirt' provider... 00:04:27.989 ==> default: Creating image (snapshot of base box volume). 00:04:28.247 ==> default: Creating domain with the following settings... 00:04:28.247 ==> default: -- Name: fedora38-38-1.6-1701806725-069-updated-1701632595-patched-kernel_default_1713196617_0192022f0dab658dad39 00:04:28.247 ==> default: -- Domain type: kvm 00:04:28.247 ==> default: -- Cpus: 10 00:04:28.247 ==> default: -- Feature: acpi 00:04:28.247 ==> default: -- Feature: apic 00:04:28.247 ==> default: -- Feature: pae 00:04:28.248 ==> default: -- Memory: 12288M 00:04:28.248 ==> default: -- Memory Backing: hugepages: 00:04:28.248 ==> default: -- Management MAC: 00:04:28.248 ==> default: -- Loader: 00:04:28.248 ==> default: -- Nvram: 00:04:28.248 ==> default: -- Base box: spdk/fedora38 00:04:28.248 ==> default: -- Storage pool: default 00:04:28.248 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1701806725-069-updated-1701632595-patched-kernel_default_1713196617_0192022f0dab658dad39.img (20G) 00:04:28.248 ==> default: -- Volume Cache: default 00:04:28.248 ==> default: -- Kernel: 00:04:28.248 ==> default: -- Initrd: 00:04:28.248 ==> default: -- Graphics Type: vnc 00:04:28.248 ==> default: -- Graphics Port: -1 00:04:28.248 ==> default: -- Graphics IP: 127.0.0.1 00:04:28.248 ==> default: -- Graphics Password: Not defined 00:04:28.248 ==> default: -- Video Type: cirrus 00:04:28.248 ==> default: -- Video VRAM: 9216 00:04:28.248 ==> default: -- Sound Type: 00:04:28.248 ==> default: -- Keymap: en-us 00:04:28.248 ==> default: -- TPM Path: 00:04:28.248 ==> default: -- INPUT: type=mouse, bus=ps2 00:04:28.248 ==> default: -- Command line args: 00:04:28.248 ==> default: -> value=-device, 00:04:28.248 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:04:28.248 ==> default: -> value=-drive, 00:04:28.248 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:04:28.248 ==> default: -> value=-device, 00:04:28.248 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:28.248 ==> default: -> value=-device, 00:04:28.248 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:04:28.248 ==> default: -> value=-drive, 00:04:28.248 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:04:28.248 ==> default: -> value=-device, 00:04:28.248 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:28.248 ==> 
default: -> value=-drive, 00:04:28.248 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:04:28.248 ==> default: -> value=-device, 00:04:28.248 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:28.248 ==> default: -> value=-drive, 00:04:28.248 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:04:28.248 ==> default: -> value=-device, 00:04:28.248 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:28.248 ==> default: Creating shared folders metadata... 00:04:28.248 ==> default: Starting domain. 00:04:30.848 ==> default: Waiting for domain to get an IP address... 00:04:48.933 ==> default: Waiting for SSH to become available... 00:04:48.933 ==> default: Configuring and enabling network interfaces... 00:04:53.120 default: SSH address: 192.168.121.205:22 00:04:53.120 default: SSH username: vagrant 00:04:53.120 default: SSH auth method: private key 00:04:55.022 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:05:03.172 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:05:09.781 ==> default: Mounting SSHFS shared folder... 00:05:11.682 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:05:11.682 ==> default: Checking Mount.. 00:05:12.616 ==> default: Folder Successfully Mounted! 00:05:12.616 ==> default: Running provisioner: file... 00:05:13.550 default: ~/.gitconfig => .gitconfig 00:05:14.118 00:05:14.118 SUCCESS! 00:05:14.118 00:05:14.118 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:05:14.118 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:05:14.118 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:05:14.118 00:05:14.127 [Pipeline] } 00:05:14.145 [Pipeline] // stage 00:05:14.154 [Pipeline] dir 00:05:14.154 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:05:14.156 [Pipeline] { 00:05:14.170 [Pipeline] catchError 00:05:14.171 [Pipeline] { 00:05:14.185 [Pipeline] sh 00:05:14.463 + vagrant ssh-config --host vagrant 00:05:14.463 + sed -ne /^Host/,$p 00:05:14.463 + tee ssh_conf 00:05:18.660 Host vagrant 00:05:18.660 HostName 192.168.121.205 00:05:18.660 User vagrant 00:05:18.660 Port 22 00:05:18.660 UserKnownHostsFile /dev/null 00:05:18.660 StrictHostKeyChecking no 00:05:18.660 PasswordAuthentication no 00:05:18.660 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1701806725-069-updated-1701632595-patched-kernel/libvirt/fedora38 00:05:18.660 IdentitiesOnly yes 00:05:18.660 LogLevel FATAL 00:05:18.660 ForwardAgent yes 00:05:18.660 ForwardX11 yes 00:05:18.660 00:05:18.673 [Pipeline] withEnv 00:05:18.675 [Pipeline] { 00:05:18.691 [Pipeline] sh 00:05:18.971 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:05:18.971 source /etc/os-release 00:05:18.971 [[ -e /image.version ]] && img=$(< /image.version) 00:05:18.971 # Minimal, systemd-like check. 
00:05:18.971 if [[ -e /.dockerenv ]]; then 00:05:18.971 # Clear garbage from the node's name: 00:05:18.971 # agt-er_autotest_547-896 -> autotest_547-896 00:05:18.971 # $HOSTNAME is the actual container id 00:05:18.971 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:05:18.971 if mountpoint -q /etc/hostname; then 00:05:18.971 # We can assume this is a mount from a host where container is running, 00:05:18.971 # so fetch its hostname to easily identify the target swarm worker. 00:05:18.971 container="$(< /etc/hostname) ($agent)" 00:05:18.971 else 00:05:18.971 # Fallback 00:05:18.971 container=$agent 00:05:18.971 fi 00:05:18.971 fi 00:05:18.971 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:05:18.971 00:05:19.242 [Pipeline] } 00:05:19.261 [Pipeline] // withEnv 00:05:19.269 [Pipeline] setCustomBuildProperty 00:05:19.284 [Pipeline] stage 00:05:19.286 [Pipeline] { (Tests) 00:05:19.306 [Pipeline] sh 00:05:19.587 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:05:19.860 [Pipeline] timeout 00:05:19.860 Timeout set to expire in 30 min 00:05:19.862 [Pipeline] { 00:05:19.878 [Pipeline] sh 00:05:20.158 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:05:20.724 HEAD is now at 26d44a121 trace: rename owner to owner_type 00:05:20.737 [Pipeline] sh 00:05:21.017 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:05:21.290 [Pipeline] sh 00:05:21.573 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:05:21.846 [Pipeline] sh 00:05:22.127 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:05:22.428 ++ readlink -f spdk_repo 00:05:22.428 + DIR_ROOT=/home/vagrant/spdk_repo 00:05:22.428 + [[ -n /home/vagrant/spdk_repo ]] 00:05:22.428 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:05:22.428 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:05:22.428 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:05:22.428 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:05:22.428 + [[ -d /home/vagrant/spdk_repo/output ]] 00:05:22.428 + cd /home/vagrant/spdk_repo 00:05:22.428 + source /etc/os-release 00:05:22.428 ++ NAME='Fedora Linux' 00:05:22.428 ++ VERSION='38 (Cloud Edition)' 00:05:22.428 ++ ID=fedora 00:05:22.428 ++ VERSION_ID=38 00:05:22.428 ++ VERSION_CODENAME= 00:05:22.428 ++ PLATFORM_ID=platform:f38 00:05:22.428 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:05:22.428 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:22.428 ++ LOGO=fedora-logo-icon 00:05:22.428 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:05:22.428 ++ HOME_URL=https://fedoraproject.org/ 00:05:22.428 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:05:22.428 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:22.428 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:22.428 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:22.428 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:05:22.428 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:22.429 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:05:22.429 ++ SUPPORT_END=2024-05-14 00:05:22.429 ++ VARIANT='Cloud Edition' 00:05:22.429 ++ VARIANT_ID=cloud 00:05:22.429 + uname -a 00:05:22.429 Linux fedora38-cloud-1701806725-069-updated-1701632595 6.5.12-200.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Dec 3 20:08:38 UTC 2023 x86_64 GNU/Linux 00:05:22.429 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:22.995 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.995 Hugepages 00:05:22.995 node hugesize free / total 00:05:22.995 node0 1048576kB 0 / 0 00:05:22.995 node0 2048kB 0 / 0 00:05:22.995 00:05:22.995 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:22.995 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:22.995 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:22.995 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:22.995 + rm -f /tmp/spdk-ld-path 00:05:22.995 + source autorun-spdk.conf 00:05:22.995 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:22.995 ++ SPDK_TEST_NVMF=1 00:05:22.995 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:22.995 ++ SPDK_TEST_URING=1 00:05:22.995 ++ SPDK_TEST_USDT=1 00:05:22.995 ++ SPDK_RUN_UBSAN=1 00:05:22.995 ++ NET_TYPE=virt 00:05:22.995 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:05:22.995 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:05:22.995 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:22.995 ++ RUN_NIGHTLY=1 00:05:22.995 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:22.995 + [[ -n '' ]] 00:05:22.995 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:05:22.995 + for M in /var/spdk/build-*-manifest.txt 00:05:22.995 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:05:22.995 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:22.995 + for M in /var/spdk/build-*-manifest.txt 00:05:22.995 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:22.995 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:22.995 + for M in /var/spdk/build-*-manifest.txt 00:05:22.995 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:22.995 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:22.995 ++ uname 00:05:22.995 + [[ Linux == \L\i\n\u\x ]] 00:05:22.995 + sudo dmesg -T 00:05:22.995 + sudo dmesg --clear 00:05:22.995 + dmesg_pid=5775 00:05:22.995 + [[ Fedora Linux == FreeBSD ]] 00:05:22.995 + export 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:22.995 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:22.995 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:22.995 + [[ -x /usr/src/fio-static/fio ]] 00:05:22.995 + sudo dmesg -Tw 00:05:22.995 + export FIO_BIN=/usr/src/fio-static/fio 00:05:22.995 + FIO_BIN=/usr/src/fio-static/fio 00:05:22.995 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:22.995 + [[ ! -v VFIO_QEMU_BIN ]] 00:05:22.995 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:22.995 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:22.995 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:22.995 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:22.995 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:22.995 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:22.995 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:23.254 Test configuration: 00:05:23.254 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:23.254 SPDK_TEST_NVMF=1 00:05:23.254 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:23.254 SPDK_TEST_URING=1 00:05:23.254 SPDK_TEST_USDT=1 00:05:23.254 SPDK_RUN_UBSAN=1 00:05:23.254 NET_TYPE=virt 00:05:23.254 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:05:23.254 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:05:23.254 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:23.254 RUN_NIGHTLY=1 15:57:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:23.254 15:57:53 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:23.254 15:57:53 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.254 15:57:53 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.254 15:57:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.254 15:57:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.254 15:57:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.254 15:57:53 -- paths/export.sh@5 -- $ export PATH 00:05:23.254 15:57:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.254 15:57:53 -- 
common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:05:23.254 15:57:53 -- common/autobuild_common.sh@435 -- $ date +%s 00:05:23.254 15:57:53 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713196673.XXXXXX 00:05:23.254 15:57:53 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713196673.8kBrIk 00:05:23.254 15:57:53 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:05:23.254 15:57:53 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:05:23.254 15:57:53 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:05:23.254 15:57:53 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:05:23.254 15:57:53 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:05:23.254 15:57:53 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:05:23.254 15:57:53 -- common/autobuild_common.sh@451 -- $ get_config_params 00:05:23.254 15:57:53 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:05:23.254 15:57:53 -- common/autotest_common.sh@10 -- $ set +x 00:05:23.254 15:57:53 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:05:23.254 15:57:53 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:05:23.254 15:57:53 -- pm/common@17 -- $ local monitor 00:05:23.254 15:57:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:23.254 15:57:53 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5811 00:05:23.254 15:57:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:23.254 15:57:53 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5813 00:05:23.254 15:57:53 -- pm/common@26 -- $ sleep 1 00:05:23.254 15:57:53 -- pm/common@21 -- $ date +%s 00:05:23.254 15:57:53 -- pm/common@21 -- $ date +%s 00:05:23.254 15:57:53 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713196673 00:05:23.254 15:57:53 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713196673 00:05:23.254 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713196673_collect-vmstat.pm.log 00:05:23.255 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713196673_collect-cpu-load.pm.log 00:05:24.209 15:57:54 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:05:24.209 15:57:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:24.209 15:57:54 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:24.209 15:57:54 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:24.209 15:57:54 -- spdk/autobuild.sh@16 -- $ date -u 00:05:24.209 Mon Apr 15 03:57:54 PM UTC 2024 00:05:24.209 15:57:54 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:24.209 v24.05-pre-385-g26d44a121 00:05:24.209 15:57:54 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:05:24.209 
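Just above, autobuild starts two background resource monitors (collect-cpu-load and collect-vmstat) and registers an EXIT trap (stop_monitor_resources) so they are torn down when the build ends. The bash sketch below shows that start-monitors/stop-on-exit pattern in generic form; the collector commands (vmstat, top) and file names are placeholders, not the actual SPDK pm scripts.

#!/usr/bin/env bash
# Generic sketch of the "start background monitors, stop them on EXIT" pattern
# traced above. Collector commands and paths are placeholders.
set -euo pipefail

declare -A MONITOR_PIDS
output_dir=./power
mkdir -p "$output_dir"

start_monitors() {
    local ts
    ts=$(date +%s)
    # Each collector runs in the background; its PID is recorded so it can be
    # stopped later, mirroring MONITOR_RESOURCES_PIDS in the log.
    vmstat 1 > "$output_dir/monitor.$ts.vmstat.log" &
    MONITOR_PIDS[vmstat]=$!
    top -b -d 1 > "$output_dir/monitor.$ts.cpu.log" &
    MONITOR_PIDS[cpu]=$!
}

stop_monitors() {
    local pid
    for pid in "${MONITOR_PIDS[@]}"; do
        kill "$pid" 2>/dev/null || true
    done
}

trap stop_monitors EXIT
start_monitors
sleep 5   # stand-in for the actual build work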
15:57:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:24.209 15:57:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:24.209 15:57:54 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:05:24.209 15:57:54 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:05:24.209 15:57:54 -- common/autotest_common.sh@10 -- $ set +x 00:05:24.209 ************************************ 00:05:24.209 START TEST ubsan 00:05:24.209 ************************************ 00:05:24.209 using ubsan 00:05:24.209 15:57:54 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:05:24.209 00:05:24.209 real 0m0.000s 00:05:24.209 user 0m0.000s 00:05:24.209 sys 0m0.000s 00:05:24.209 15:57:54 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:05:24.209 15:57:54 -- common/autotest_common.sh@10 -- $ set +x 00:05:24.209 ************************************ 00:05:24.209 END TEST ubsan 00:05:24.209 ************************************ 00:05:24.468 15:57:54 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:05:24.468 15:57:54 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:05:24.468 15:57:54 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:05:24.468 15:57:54 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:05:24.468 15:57:54 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:05:24.468 15:57:54 -- common/autotest_common.sh@10 -- $ set +x 00:05:24.468 ************************************ 00:05:24.468 START TEST build_native_dpdk 00:05:24.468 ************************************ 00:05:24.468 15:57:54 -- common/autotest_common.sh@1111 -- $ _build_native_dpdk 00:05:24.468 15:57:54 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:05:24.468 15:57:54 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:05:24.468 15:57:54 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:05:24.468 15:57:54 -- common/autobuild_common.sh@51 -- $ local compiler 00:05:24.468 15:57:54 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:05:24.468 15:57:54 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:05:24.468 15:57:54 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:05:24.468 15:57:54 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:05:24.468 15:57:54 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:05:24.468 15:57:54 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:05:24.468 15:57:54 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:05:24.468 15:57:54 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:05:24.468 15:57:54 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:05:24.468 15:57:54 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:05:24.468 15:57:54 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:05:24.468 15:57:54 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:05:24.468 15:57:54 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:05:24.468 15:57:54 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:05:24.468 15:57:54 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:05:24.468 15:57:54 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:05:24.468 caf0f5d395 version: 22.11.4 00:05:24.468 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:05:24.468 dc9c799c7d vhost: fix missing spinlock unlock 00:05:24.468 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:05:24.468 6ef77f2a5e net/gve: fix RX buffer size alignment 00:05:24.468 15:57:54 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:05:24.468 15:57:54 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:05:24.468 15:57:54 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:05:24.468 15:57:54 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:05:24.468 15:57:54 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:05:24.468 15:57:54 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:05:24.468 15:57:54 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:05:24.468 15:57:54 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:05:24.468 15:57:54 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:05:24.468 15:57:54 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:05:24.468 15:57:54 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:05:24.468 15:57:54 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:05:24.468 15:57:54 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:05:24.468 15:57:54 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:05:24.468 15:57:54 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:05:24.468 15:57:54 -- common/autobuild_common.sh@168 -- $ uname -s 00:05:24.468 15:57:54 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:05:24.468 15:57:54 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:05:24.468 15:57:54 -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:05:24.468 15:57:54 -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:05:24.468 15:57:54 -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:05:24.468 15:57:54 -- scripts/common.sh@333 -- $ IFS=.-: 00:05:24.468 15:57:54 -- scripts/common.sh@333 -- $ read -ra ver1 00:05:24.468 15:57:54 -- scripts/common.sh@334 -- $ IFS=.-: 00:05:24.468 15:57:54 -- scripts/common.sh@334 -- $ read -ra ver2 00:05:24.468 15:57:54 -- scripts/common.sh@335 -- $ local 'op=<' 00:05:24.468 15:57:54 -- scripts/common.sh@337 -- $ ver1_l=3 00:05:24.468 15:57:54 -- scripts/common.sh@338 -- $ ver2_l=3 00:05:24.468 15:57:54 -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:05:24.468 15:57:54 -- scripts/common.sh@341 -- $ case "$op" in 00:05:24.468 15:57:54 -- scripts/common.sh@342 -- $ : 1 00:05:24.468 15:57:54 -- scripts/common.sh@361 -- $ (( v = 0 )) 00:05:24.468 15:57:54 -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.468 15:57:54 -- scripts/common.sh@362 -- $ decimal 22 00:05:24.468 15:57:54 -- scripts/common.sh@350 -- $ local d=22 00:05:24.468 15:57:54 -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:05:24.468 15:57:54 -- scripts/common.sh@352 -- $ echo 22 00:05:24.468 15:57:54 -- scripts/common.sh@362 -- $ ver1[v]=22 00:05:24.468 15:57:54 -- scripts/common.sh@363 -- $ decimal 21 00:05:24.468 15:57:54 -- scripts/common.sh@350 -- $ local d=21 00:05:24.468 15:57:54 -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:05:24.468 15:57:54 -- scripts/common.sh@352 -- $ echo 21 00:05:24.468 15:57:54 -- scripts/common.sh@363 -- $ ver2[v]=21 00:05:24.468 15:57:54 -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:05:24.468 15:57:54 -- scripts/common.sh@364 -- $ return 1 00:05:24.468 15:57:54 -- common/autobuild_common.sh@173 -- $ patch -p1 00:05:24.469 patching file config/rte_config.h 00:05:24.469 Hunk #1 succeeded at 60 (offset 1 line). 00:05:24.469 15:57:54 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:05:24.469 15:57:54 -- common/autobuild_common.sh@178 -- $ uname -s 00:05:24.469 15:57:54 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:05:24.469 15:57:54 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:05:24.469 15:57:54 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:05:29.737 The Meson build system 00:05:29.737 Version: 1.3.0 00:05:29.737 Source dir: /home/vagrant/spdk_repo/dpdk 00:05:29.737 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:05:29.737 Build type: native build 00:05:29.737 Program cat found: YES (/usr/bin/cat) 00:05:29.737 Project name: DPDK 00:05:29.737 Project version: 22.11.4 00:05:29.737 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:05:29.737 C linker for the host machine: gcc ld.bfd 2.39-16 00:05:29.737 Host machine cpu family: x86_64 00:05:29.737 Host machine cpu: x86_64 00:05:29.737 Message: ## Building in Developer Mode ## 00:05:29.737 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:29.737 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:05:29.737 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:05:29.737 Program objdump found: YES (/usr/bin/objdump) 00:05:29.737 Program python3 found: YES (/usr/bin/python3) 00:05:29.737 Program cat found: YES (/usr/bin/cat) 00:05:29.737 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
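The xtrace above (lt 22.11.4 21.11.0 calling cmp_versions and returning 1) decides whether the checked-out DPDK is older than 21.11 before the rte_config.h patch is applied. A simplified re-implementation of that per-field comparison is sketched below; it is not the actual scripts/common.sh helper, just the same idea in compact form, with the function name version_lt chosen here for illustration.

#!/usr/bin/env bash
# Simplified sketch of the dot-separated version comparison traced above.
version_lt() {
    # Returns 0 when $1 is strictly older than $2.
    local -a v1 v2
    local i len
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing fields count as 0, so 22.11 compares like 22.11.0.
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
    done
    return 1   # equal versions are not "less than"
}

version_lt 22.11.4 21.11.0 || echo "22.11.4 is not older than 21.11.0"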
00:05:29.737 Checking for size of "void *" : 8 00:05:29.737 Checking for size of "void *" : 8 (cached) 00:05:29.737 Library m found: YES 00:05:29.737 Library numa found: YES 00:05:29.737 Has header "numaif.h" : YES 00:05:29.737 Library fdt found: NO 00:05:29.737 Library execinfo found: NO 00:05:29.737 Has header "execinfo.h" : YES 00:05:29.737 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:05:29.737 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:29.737 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:29.737 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:29.737 Run-time dependency openssl found: YES 3.0.9 00:05:29.737 Run-time dependency libpcap found: YES 1.10.4 00:05:29.737 Has header "pcap.h" with dependency libpcap: YES 00:05:29.737 Compiler for C supports arguments -Wcast-qual: YES 00:05:29.737 Compiler for C supports arguments -Wdeprecated: YES 00:05:29.737 Compiler for C supports arguments -Wformat: YES 00:05:29.737 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:29.737 Compiler for C supports arguments -Wformat-security: NO 00:05:29.737 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:29.737 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:29.737 Compiler for C supports arguments -Wnested-externs: YES 00:05:29.737 Compiler for C supports arguments -Wold-style-definition: YES 00:05:29.737 Compiler for C supports arguments -Wpointer-arith: YES 00:05:29.737 Compiler for C supports arguments -Wsign-compare: YES 00:05:29.737 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:29.737 Compiler for C supports arguments -Wundef: YES 00:05:29.737 Compiler for C supports arguments -Wwrite-strings: YES 00:05:29.737 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:29.737 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:29.737 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:29.737 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:29.737 Compiler for C supports arguments -mavx512f: YES 00:05:29.737 Checking if "AVX512 checking" compiles: YES 00:05:29.737 Fetching value of define "__SSE4_2__" : 1 00:05:29.737 Fetching value of define "__AES__" : 1 00:05:29.737 Fetching value of define "__AVX__" : 1 00:05:29.737 Fetching value of define "__AVX2__" : 1 00:05:29.737 Fetching value of define "__AVX512BW__" : 1 00:05:29.737 Fetching value of define "__AVX512CD__" : 1 00:05:29.737 Fetching value of define "__AVX512DQ__" : 1 00:05:29.737 Fetching value of define "__AVX512F__" : 1 00:05:29.737 Fetching value of define "__AVX512VL__" : 1 00:05:29.737 Fetching value of define "__PCLMUL__" : 1 00:05:29.737 Fetching value of define "__RDRND__" : 1 00:05:29.737 Fetching value of define "__RDSEED__" : 1 00:05:29.737 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:29.737 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:29.737 Message: lib/kvargs: Defining dependency "kvargs" 00:05:29.737 Message: lib/telemetry: Defining dependency "telemetry" 00:05:29.737 Checking for function "getentropy" : YES 00:05:29.737 Message: lib/eal: Defining dependency "eal" 00:05:29.737 Message: lib/ring: Defining dependency "ring" 00:05:29.737 Message: lib/rcu: Defining dependency "rcu" 00:05:29.737 Message: lib/mempool: Defining dependency "mempool" 00:05:29.737 Message: lib/mbuf: Defining dependency "mbuf" 00:05:29.737 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:29.737 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:05:29.737 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:29.737 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:29.737 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:29.737 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:05:29.737 Compiler for C supports arguments -mpclmul: YES 00:05:29.737 Compiler for C supports arguments -maes: YES 00:05:29.737 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:29.737 Compiler for C supports arguments -mavx512bw: YES 00:05:29.737 Compiler for C supports arguments -mavx512dq: YES 00:05:29.737 Compiler for C supports arguments -mavx512vl: YES 00:05:29.737 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:29.737 Compiler for C supports arguments -mavx2: YES 00:05:29.737 Compiler for C supports arguments -mavx: YES 00:05:29.737 Message: lib/net: Defining dependency "net" 00:05:29.737 Message: lib/meter: Defining dependency "meter" 00:05:29.737 Message: lib/ethdev: Defining dependency "ethdev" 00:05:29.737 Message: lib/pci: Defining dependency "pci" 00:05:29.737 Message: lib/cmdline: Defining dependency "cmdline" 00:05:29.737 Message: lib/metrics: Defining dependency "metrics" 00:05:29.737 Message: lib/hash: Defining dependency "hash" 00:05:29.737 Message: lib/timer: Defining dependency "timer" 00:05:29.737 Fetching value of define "__AVX2__" : 1 (cached) 00:05:29.737 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:29.737 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:29.737 Fetching value of define "__AVX512CD__" : 1 (cached) 00:05:29.737 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:29.737 Message: lib/acl: Defining dependency "acl" 00:05:29.737 Message: lib/bbdev: Defining dependency "bbdev" 00:05:29.737 Message: lib/bitratestats: Defining dependency "bitratestats" 00:05:29.737 Run-time dependency libelf found: YES 0.190 00:05:29.737 Message: lib/bpf: Defining dependency "bpf" 00:05:29.737 Message: lib/cfgfile: Defining dependency "cfgfile" 00:05:29.737 Message: lib/compressdev: Defining dependency "compressdev" 00:05:29.737 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:29.737 Message: lib/distributor: Defining dependency "distributor" 00:05:29.737 Message: lib/efd: Defining dependency "efd" 00:05:29.737 Message: lib/eventdev: Defining dependency "eventdev" 00:05:29.737 Message: lib/gpudev: Defining dependency "gpudev" 00:05:29.737 Message: lib/gro: Defining dependency "gro" 00:05:29.737 Message: lib/gso: Defining dependency "gso" 00:05:29.737 Message: lib/ip_frag: Defining dependency "ip_frag" 00:05:29.737 Message: lib/jobstats: Defining dependency "jobstats" 00:05:29.737 Message: lib/latencystats: Defining dependency "latencystats" 00:05:29.737 Message: lib/lpm: Defining dependency "lpm" 00:05:29.737 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:29.737 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:29.737 Fetching value of define "__AVX512IFMA__" : (undefined) 00:05:29.737 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:05:29.737 Message: lib/member: Defining dependency "member" 00:05:29.737 Message: lib/pcapng: Defining dependency "pcapng" 00:05:29.737 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:29.737 Message: lib/power: Defining dependency "power" 00:05:29.737 Message: lib/rawdev: Defining dependency "rawdev" 00:05:29.737 Message: lib/regexdev: Defining dependency "regexdev" 00:05:29.737 Message: lib/dmadev: 
Defining dependency "dmadev" 00:05:29.737 Message: lib/rib: Defining dependency "rib" 00:05:29.737 Message: lib/reorder: Defining dependency "reorder" 00:05:29.737 Message: lib/sched: Defining dependency "sched" 00:05:29.737 Message: lib/security: Defining dependency "security" 00:05:29.737 Message: lib/stack: Defining dependency "stack" 00:05:29.737 Has header "linux/userfaultfd.h" : YES 00:05:29.737 Message: lib/vhost: Defining dependency "vhost" 00:05:29.737 Message: lib/ipsec: Defining dependency "ipsec" 00:05:29.737 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:29.737 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:29.737 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:29.737 Message: lib/fib: Defining dependency "fib" 00:05:29.737 Message: lib/port: Defining dependency "port" 00:05:29.737 Message: lib/pdump: Defining dependency "pdump" 00:05:29.737 Message: lib/table: Defining dependency "table" 00:05:29.737 Message: lib/pipeline: Defining dependency "pipeline" 00:05:29.737 Message: lib/graph: Defining dependency "graph" 00:05:29.737 Message: lib/node: Defining dependency "node" 00:05:29.737 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:29.737 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:29.737 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:29.737 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:29.737 Compiler for C supports arguments -Wno-sign-compare: YES 00:05:29.737 Compiler for C supports arguments -Wno-unused-value: YES 00:05:29.737 Compiler for C supports arguments -Wno-format: YES 00:05:29.737 Compiler for C supports arguments -Wno-format-security: YES 00:05:29.737 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:05:29.737 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:05:31.112 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:05:31.112 Compiler for C supports arguments -Wno-unused-parameter: YES 00:05:31.112 Fetching value of define "__AVX2__" : 1 (cached) 00:05:31.112 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:31.112 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:31.112 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:31.112 Compiler for C supports arguments -mavx512bw: YES (cached) 00:05:31.112 Compiler for C supports arguments -march=skylake-avx512: YES 00:05:31.112 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:05:31.112 Program doxygen found: YES (/usr/bin/doxygen) 00:05:31.112 Configuring doxy-api.conf using configuration 00:05:31.112 Program sphinx-build found: NO 00:05:31.112 Configuring rte_build_config.h using configuration 00:05:31.112 Message: 00:05:31.112 ================= 00:05:31.112 Applications Enabled 00:05:31.112 ================= 00:05:31.112 00:05:31.112 apps: 00:05:31.112 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:05:31.112 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:05:31.112 test-security-perf, 00:05:31.112 00:05:31.112 Message: 00:05:31.112 ================= 00:05:31.112 Libraries Enabled 00:05:31.112 ================= 00:05:31.112 00:05:31.112 libs: 00:05:31.112 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:05:31.112 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:05:31.112 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:05:31.112 eventdev, gpudev, 
gro, gso, ip_frag, jobstats, latencystats, lpm, 00:05:31.112 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:05:31.112 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:05:31.112 table, pipeline, graph, node, 00:05:31.112 00:05:31.112 Message: 00:05:31.112 =============== 00:05:31.112 Drivers Enabled 00:05:31.112 =============== 00:05:31.112 00:05:31.112 common: 00:05:31.112 00:05:31.112 bus: 00:05:31.112 pci, vdev, 00:05:31.112 mempool: 00:05:31.112 ring, 00:05:31.112 dma: 00:05:31.112 00:05:31.112 net: 00:05:31.112 i40e, 00:05:31.112 raw: 00:05:31.112 00:05:31.112 crypto: 00:05:31.112 00:05:31.112 compress: 00:05:31.112 00:05:31.112 regex: 00:05:31.112 00:05:31.112 vdpa: 00:05:31.112 00:05:31.112 event: 00:05:31.112 00:05:31.112 baseband: 00:05:31.112 00:05:31.112 gpu: 00:05:31.112 00:05:31.112 00:05:31.112 Message: 00:05:31.112 ================= 00:05:31.112 Content Skipped 00:05:31.112 ================= 00:05:31.112 00:05:31.112 apps: 00:05:31.112 00:05:31.112 libs: 00:05:31.112 kni: explicitly disabled via build config (deprecated lib) 00:05:31.112 flow_classify: explicitly disabled via build config (deprecated lib) 00:05:31.112 00:05:31.112 drivers: 00:05:31.112 common/cpt: not in enabled drivers build config 00:05:31.112 common/dpaax: not in enabled drivers build config 00:05:31.112 common/iavf: not in enabled drivers build config 00:05:31.112 common/idpf: not in enabled drivers build config 00:05:31.112 common/mvep: not in enabled drivers build config 00:05:31.112 common/octeontx: not in enabled drivers build config 00:05:31.112 bus/auxiliary: not in enabled drivers build config 00:05:31.112 bus/dpaa: not in enabled drivers build config 00:05:31.112 bus/fslmc: not in enabled drivers build config 00:05:31.112 bus/ifpga: not in enabled drivers build config 00:05:31.112 bus/vmbus: not in enabled drivers build config 00:05:31.112 common/cnxk: not in enabled drivers build config 00:05:31.112 common/mlx5: not in enabled drivers build config 00:05:31.112 common/qat: not in enabled drivers build config 00:05:31.112 common/sfc_efx: not in enabled drivers build config 00:05:31.112 mempool/bucket: not in enabled drivers build config 00:05:31.112 mempool/cnxk: not in enabled drivers build config 00:05:31.112 mempool/dpaa: not in enabled drivers build config 00:05:31.112 mempool/dpaa2: not in enabled drivers build config 00:05:31.112 mempool/octeontx: not in enabled drivers build config 00:05:31.112 mempool/stack: not in enabled drivers build config 00:05:31.112 dma/cnxk: not in enabled drivers build config 00:05:31.112 dma/dpaa: not in enabled drivers build config 00:05:31.112 dma/dpaa2: not in enabled drivers build config 00:05:31.112 dma/hisilicon: not in enabled drivers build config 00:05:31.112 dma/idxd: not in enabled drivers build config 00:05:31.112 dma/ioat: not in enabled drivers build config 00:05:31.112 dma/skeleton: not in enabled drivers build config 00:05:31.112 net/af_packet: not in enabled drivers build config 00:05:31.112 net/af_xdp: not in enabled drivers build config 00:05:31.112 net/ark: not in enabled drivers build config 00:05:31.112 net/atlantic: not in enabled drivers build config 00:05:31.112 net/avp: not in enabled drivers build config 00:05:31.112 net/axgbe: not in enabled drivers build config 00:05:31.112 net/bnx2x: not in enabled drivers build config 00:05:31.112 net/bnxt: not in enabled drivers build config 00:05:31.112 net/bonding: not in enabled drivers build config 00:05:31.112 net/cnxk: not in enabled drivers build config 
00:05:31.112 net/cxgbe: not in enabled drivers build config 00:05:31.112 net/dpaa: not in enabled drivers build config 00:05:31.112 net/dpaa2: not in enabled drivers build config 00:05:31.112 net/e1000: not in enabled drivers build config 00:05:31.112 net/ena: not in enabled drivers build config 00:05:31.112 net/enetc: not in enabled drivers build config 00:05:31.112 net/enetfec: not in enabled drivers build config 00:05:31.112 net/enic: not in enabled drivers build config 00:05:31.112 net/failsafe: not in enabled drivers build config 00:05:31.112 net/fm10k: not in enabled drivers build config 00:05:31.112 net/gve: not in enabled drivers build config 00:05:31.112 net/hinic: not in enabled drivers build config 00:05:31.112 net/hns3: not in enabled drivers build config 00:05:31.112 net/iavf: not in enabled drivers build config 00:05:31.112 net/ice: not in enabled drivers build config 00:05:31.112 net/idpf: not in enabled drivers build config 00:05:31.112 net/igc: not in enabled drivers build config 00:05:31.112 net/ionic: not in enabled drivers build config 00:05:31.112 net/ipn3ke: not in enabled drivers build config 00:05:31.112 net/ixgbe: not in enabled drivers build config 00:05:31.112 net/kni: not in enabled drivers build config 00:05:31.112 net/liquidio: not in enabled drivers build config 00:05:31.112 net/mana: not in enabled drivers build config 00:05:31.112 net/memif: not in enabled drivers build config 00:05:31.112 net/mlx4: not in enabled drivers build config 00:05:31.112 net/mlx5: not in enabled drivers build config 00:05:31.112 net/mvneta: not in enabled drivers build config 00:05:31.112 net/mvpp2: not in enabled drivers build config 00:05:31.112 net/netvsc: not in enabled drivers build config 00:05:31.112 net/nfb: not in enabled drivers build config 00:05:31.112 net/nfp: not in enabled drivers build config 00:05:31.112 net/ngbe: not in enabled drivers build config 00:05:31.112 net/null: not in enabled drivers build config 00:05:31.112 net/octeontx: not in enabled drivers build config 00:05:31.112 net/octeon_ep: not in enabled drivers build config 00:05:31.112 net/pcap: not in enabled drivers build config 00:05:31.112 net/pfe: not in enabled drivers build config 00:05:31.112 net/qede: not in enabled drivers build config 00:05:31.112 net/ring: not in enabled drivers build config 00:05:31.112 net/sfc: not in enabled drivers build config 00:05:31.112 net/softnic: not in enabled drivers build config 00:05:31.112 net/tap: not in enabled drivers build config 00:05:31.112 net/thunderx: not in enabled drivers build config 00:05:31.112 net/txgbe: not in enabled drivers build config 00:05:31.112 net/vdev_netvsc: not in enabled drivers build config 00:05:31.112 net/vhost: not in enabled drivers build config 00:05:31.112 net/virtio: not in enabled drivers build config 00:05:31.112 net/vmxnet3: not in enabled drivers build config 00:05:31.112 raw/cnxk_bphy: not in enabled drivers build config 00:05:31.112 raw/cnxk_gpio: not in enabled drivers build config 00:05:31.112 raw/dpaa2_cmdif: not in enabled drivers build config 00:05:31.112 raw/ifpga: not in enabled drivers build config 00:05:31.112 raw/ntb: not in enabled drivers build config 00:05:31.112 raw/skeleton: not in enabled drivers build config 00:05:31.112 crypto/armv8: not in enabled drivers build config 00:05:31.112 crypto/bcmfs: not in enabled drivers build config 00:05:31.112 crypto/caam_jr: not in enabled drivers build config 00:05:31.112 crypto/ccp: not in enabled drivers build config 00:05:31.112 crypto/cnxk: not in enabled drivers 
build config 00:05:31.112 crypto/dpaa_sec: not in enabled drivers build config 00:05:31.112 crypto/dpaa2_sec: not in enabled drivers build config 00:05:31.112 crypto/ipsec_mb: not in enabled drivers build config 00:05:31.112 crypto/mlx5: not in enabled drivers build config 00:05:31.112 crypto/mvsam: not in enabled drivers build config 00:05:31.112 crypto/nitrox: not in enabled drivers build config 00:05:31.112 crypto/null: not in enabled drivers build config 00:05:31.112 crypto/octeontx: not in enabled drivers build config 00:05:31.112 crypto/openssl: not in enabled drivers build config 00:05:31.112 crypto/scheduler: not in enabled drivers build config 00:05:31.112 crypto/uadk: not in enabled drivers build config 00:05:31.112 crypto/virtio: not in enabled drivers build config 00:05:31.112 compress/isal: not in enabled drivers build config 00:05:31.112 compress/mlx5: not in enabled drivers build config 00:05:31.113 compress/octeontx: not in enabled drivers build config 00:05:31.113 compress/zlib: not in enabled drivers build config 00:05:31.113 regex/mlx5: not in enabled drivers build config 00:05:31.113 regex/cn9k: not in enabled drivers build config 00:05:31.113 vdpa/ifc: not in enabled drivers build config 00:05:31.113 vdpa/mlx5: not in enabled drivers build config 00:05:31.113 vdpa/sfc: not in enabled drivers build config 00:05:31.113 event/cnxk: not in enabled drivers build config 00:05:31.113 event/dlb2: not in enabled drivers build config 00:05:31.113 event/dpaa: not in enabled drivers build config 00:05:31.113 event/dpaa2: not in enabled drivers build config 00:05:31.113 event/dsw: not in enabled drivers build config 00:05:31.113 event/opdl: not in enabled drivers build config 00:05:31.113 event/skeleton: not in enabled drivers build config 00:05:31.113 event/sw: not in enabled drivers build config 00:05:31.113 event/octeontx: not in enabled drivers build config 00:05:31.113 baseband/acc: not in enabled drivers build config 00:05:31.113 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:05:31.113 baseband/fpga_lte_fec: not in enabled drivers build config 00:05:31.113 baseband/la12xx: not in enabled drivers build config 00:05:31.113 baseband/null: not in enabled drivers build config 00:05:31.113 baseband/turbo_sw: not in enabled drivers build config 00:05:31.113 gpu/cuda: not in enabled drivers build config 00:05:31.113 00:05:31.113 00:05:31.113 Build targets in project: 311 00:05:31.113 00:05:31.113 DPDK 22.11.4 00:05:31.113 00:05:31.113 User defined options 00:05:31.113 libdir : lib 00:05:31.113 prefix : /home/vagrant/spdk_repo/dpdk/build 00:05:31.113 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:05:31.113 c_link_args : 00:05:31.113 enable_docs : false 00:05:31.113 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:05:31.113 enable_kmods : false 00:05:31.113 machine : native 00:05:31.113 tests : false 00:05:31.113 00:05:31.113 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:31.113 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
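For reference, a minimal sketch of the equivalent non-deprecated `meson setup` invocation that the WARNING above recommends, assuming the "User defined options" summary reflects exactly what was passed at configure time; the build directory name build-tmp is taken from the ninja command that follows, and every option value mirrors the summary rather than anything beyond this log.

# Configure DPDK 22.11.4 with the options reported in the summary above,
# using `meson setup` explicitly instead of the deprecated bare `meson` form.
meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
# Build with the same parallelism the log shows.
ninja -C build-tmp -j10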
00:05:31.113 15:58:00 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:05:31.113 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:05:31.113 [1/740] Generating lib/rte_kvargs_mingw with a custom command 00:05:31.113 [2/740] Generating lib/rte_telemetry_def with a custom command 00:05:31.113 [3/740] Generating lib/rte_telemetry_mingw with a custom command 00:05:31.113 [4/740] Generating lib/rte_kvargs_def with a custom command 00:05:31.372 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:31.372 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:31.372 [7/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:31.372 [8/740] Linking static target lib/librte_kvargs.a 00:05:31.372 [9/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:31.372 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:31.372 [11/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:31.372 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:31.372 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:31.372 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:31.630 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:31.630 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:31.630 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:31.630 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:31.630 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:31.630 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:31.630 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:31.630 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:05:31.630 [23/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:31.630 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:31.630 [25/740] Linking target lib/librte_kvargs.so.23.0 00:05:31.907 [26/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:31.907 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:31.907 [28/740] Linking static target lib/librte_telemetry.a 00:05:31.907 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:31.907 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:31.907 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:31.907 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:31.907 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:31.907 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:31.907 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:31.907 [36/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:05:31.907 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:31.907 [38/740] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:32.165 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:32.165 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:32.165 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:32.165 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:32.165 [43/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.165 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:32.422 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:32.422 [46/740] Linking target lib/librte_telemetry.so.23.0 00:05:32.422 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:32.422 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:32.422 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:32.422 [50/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:05:32.422 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:32.422 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:32.422 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:32.422 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:32.422 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:32.422 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:32.422 [57/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:32.681 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:32.681 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:32.681 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:32.681 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:32.681 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:32.681 [63/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:32.681 [64/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:32.681 [65/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:32.681 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:05:32.681 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:32.681 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:32.681 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:32.681 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:32.681 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:32.681 [72/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:32.938 [73/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:32.939 [74/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:32.939 [75/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:32.939 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:32.939 [77/740] Generating lib/rte_eal_def with a custom command 00:05:32.939 [78/740] Generating lib/rte_eal_mingw with a custom 
command 00:05:32.939 [79/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:32.939 [80/740] Generating lib/rte_ring_def with a custom command 00:05:32.939 [81/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:32.939 [82/740] Generating lib/rte_ring_mingw with a custom command 00:05:32.939 [83/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:32.939 [84/740] Generating lib/rte_rcu_def with a custom command 00:05:32.939 [85/740] Generating lib/rte_rcu_mingw with a custom command 00:05:32.939 [86/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:32.939 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:32.939 [88/740] Linking static target lib/librte_ring.a 00:05:33.196 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:33.196 [90/740] Generating lib/rte_mempool_def with a custom command 00:05:33.196 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:05:33.196 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:33.196 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:33.454 [94/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:33.454 [95/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:33.454 [96/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.454 [97/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:33.454 [98/740] Linking static target lib/librte_eal.a 00:05:33.454 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:33.454 [100/740] Generating lib/rte_mbuf_def with a custom command 00:05:33.454 [101/740] Generating lib/rte_mbuf_mingw with a custom command 00:05:33.711 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:33.711 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:33.711 [104/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:33.711 [105/740] Linking static target lib/librte_mempool.a 00:05:33.971 [106/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:33.971 [107/740] Linking static target lib/librte_rcu.a 00:05:33.971 [108/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:33.971 [109/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:34.228 [110/740] Generating lib/rte_net_def with a custom command 00:05:34.228 [111/740] Generating lib/rte_net_mingw with a custom command 00:05:34.228 [112/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:34.229 [113/740] Generating lib/rte_meter_def with a custom command 00:05:34.229 [114/740] Generating lib/rte_meter_mingw with a custom command 00:05:34.229 [115/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:34.229 [116/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:34.229 [117/740] Linking static target lib/librte_meter.a 00:05:34.229 [118/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:34.229 [119/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:34.229 [120/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.544 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:34.544 [122/740] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:34.544 [123/740] Linking static target lib/librte_net.a 00:05:34.544 [124/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:34.544 [125/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.544 [126/740] Linking static target lib/librte_mbuf.a 00:05:34.801 [127/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:34.801 [128/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.801 [129/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.801 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:35.059 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:35.059 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:35.059 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:35.317 [134/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:35.317 [135/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:35.317 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:35.575 [137/740] Generating lib/rte_ethdev_def with a custom command 00:05:35.575 [138/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:35.575 [139/740] Generating lib/rte_ethdev_mingw with a custom command 00:05:35.575 [140/740] Generating lib/rte_pci_def with a custom command 00:05:35.575 [141/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:35.575 [142/740] Generating lib/rte_pci_mingw with a custom command 00:05:35.575 [143/740] Linking static target lib/librte_pci.a 00:05:35.575 [144/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:35.575 [145/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:35.833 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:35.833 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:35.833 [148/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:35.833 [149/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:35.833 [150/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:35.833 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:35.833 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:36.091 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:36.091 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:36.091 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:36.091 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:36.091 [157/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:36.091 [158/740] Generating lib/rte_cmdline_def with a custom command 00:05:36.091 [159/740] Generating lib/rte_cmdline_mingw with a custom command 00:05:36.091 [160/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:36.091 [161/740] Generating lib/rte_metrics_mingw with a custom command 00:05:36.091 [162/740] Generating lib/rte_metrics_def with a custom command 00:05:36.091 [163/740] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:36.091 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:36.091 [165/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:36.091 [166/740] Generating lib/rte_hash_def with a custom command 00:05:36.349 [167/740] Linking static target lib/librte_cmdline.a 00:05:36.349 [168/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:05:36.349 [169/740] Generating lib/rte_hash_mingw with a custom command 00:05:36.349 [170/740] Generating lib/rte_timer_def with a custom command 00:05:36.349 [171/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:36.349 [172/740] Generating lib/rte_timer_mingw with a custom command 00:05:36.349 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:36.607 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:05:36.607 [175/740] Linking static target lib/librte_metrics.a 00:05:36.864 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:36.864 [177/740] Linking static target lib/librte_timer.a 00:05:36.864 [178/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:05:37.123 [179/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.123 [180/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:05:37.123 [181/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:37.123 [182/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:37.123 [183/740] Linking static target lib/librte_ethdev.a 00:05:37.123 [184/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.380 [185/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:05:37.380 [186/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.380 [187/740] Generating lib/rte_acl_def with a custom command 00:05:37.380 [188/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:05:37.380 [189/740] Generating lib/rte_acl_mingw with a custom command 00:05:37.380 [190/740] Generating lib/rte_bbdev_def with a custom command 00:05:37.380 [191/740] Generating lib/rte_bbdev_mingw with a custom command 00:05:37.638 [192/740] Generating lib/rte_bitratestats_def with a custom command 00:05:37.638 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:05:37.895 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:05:37.895 [195/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:05:37.895 [196/740] Linking static target lib/librte_bitratestats.a 00:05:37.895 [197/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:05:38.154 [198/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.154 [199/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:05:38.154 [200/740] Linking static target lib/librte_bbdev.a 00:05:38.412 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:05:38.671 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:05:38.671 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:05:38.930 [204/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.930 [205/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 
00:05:38.930 [206/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:38.930 [207/740] Linking static target lib/librte_hash.a 00:05:38.930 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:05:39.498 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:05:39.498 [210/740] Generating lib/rte_bpf_def with a custom command 00:05:39.498 [211/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:05:39.498 [212/740] Generating lib/rte_bpf_mingw with a custom command 00:05:39.498 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:05:39.498 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:05:39.498 [215/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:05:39.498 [216/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:05:39.793 [217/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:05:39.793 [218/740] Linking static target lib/librte_cfgfile.a 00:05:39.793 [219/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:05:39.793 [220/740] Generating lib/rte_compressdev_def with a custom command 00:05:39.793 [221/740] Generating lib/rte_compressdev_mingw with a custom command 00:05:39.793 [222/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.076 [223/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:40.076 [224/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:05:40.076 [225/740] Linking static target lib/librte_bpf.a 00:05:40.076 [226/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:40.076 [227/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.076 [228/740] Generating lib/rte_cryptodev_def with a custom command 00:05:40.076 [229/740] Generating lib/rte_cryptodev_mingw with a custom command 00:05:40.076 [230/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:40.335 [231/740] Linking static target lib/librte_compressdev.a 00:05:40.335 [232/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:40.335 [233/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.335 [234/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:40.335 [235/740] Generating lib/rte_distributor_def with a custom command 00:05:40.335 [236/740] Generating lib/rte_distributor_mingw with a custom command 00:05:40.594 [237/740] Generating lib/rte_efd_def with a custom command 00:05:40.594 [238/740] Generating lib/rte_efd_mingw with a custom command 00:05:40.594 [239/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:05:40.594 [240/740] Linking static target lib/librte_acl.a 00:05:40.594 [241/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:05:40.852 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:05:40.852 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:05:40.852 [244/740] Linking static target lib/librte_distributor.a 00:05:40.852 [245/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:05:40.852 [246/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.110 [247/740] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:05:41.369 [248/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.369 [249/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.627 [250/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:05:41.627 [251/740] Generating lib/rte_eventdev_def with a custom command 00:05:41.627 [252/740] Generating lib/rte_eventdev_mingw with a custom command 00:05:41.885 [253/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:05:41.885 [254/740] Linking static target lib/librte_efd.a 00:05:42.143 [255/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:05:42.143 [256/740] Generating lib/rte_gpudev_def with a custom command 00:05:42.143 [257/740] Generating lib/rte_gpudev_mingw with a custom command 00:05:42.143 [258/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:42.143 [259/740] Linking static target lib/librte_cryptodev.a 00:05:42.402 [260/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:05:42.402 [261/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:05:42.402 [262/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:05:42.402 [263/740] Linking static target lib/librte_gpudev.a 00:05:42.402 [264/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:05:42.969 [265/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:05:42.969 [266/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:05:42.969 [267/740] Generating lib/rte_gro_def with a custom command 00:05:42.969 [268/740] Generating lib/rte_gro_mingw with a custom command 00:05:42.969 [269/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:05:42.969 [270/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:05:43.227 [271/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:05:43.227 [272/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.486 [273/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.486 [274/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.486 [275/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:05:43.486 [276/740] Generating lib/rte_gso_def with a custom command 00:05:43.486 [277/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:05:43.486 [278/740] Linking target lib/librte_eal.so.23.0 00:05:43.486 [279/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:05:43.486 [280/740] Generating lib/rte_gso_mingw with a custom command 00:05:43.486 [281/740] Linking static target lib/librte_gro.a 00:05:43.486 [282/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:05:43.486 [283/740] Linking static target lib/librte_eventdev.a 00:05:43.744 [284/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:05:43.744 [285/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:05:43.744 [286/740] Linking target lib/librte_ring.so.23.0 00:05:43.744 [287/740] Linking target lib/librte_meter.so.23.0 00:05:43.744 [288/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 
00:05:43.744 [289/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:05:43.744 [290/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:05:43.744 [291/740] Linking target lib/librte_pci.so.23.0 00:05:43.744 [292/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:05:43.744 [293/740] Linking target lib/librte_mempool.so.23.0 00:05:44.002 [294/740] Linking target lib/librte_rcu.so.23.0 00:05:44.002 [295/740] Linking target lib/librte_timer.so.23.0 00:05:44.002 [296/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:05:44.002 [297/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:05:44.002 [298/740] Linking target lib/librte_acl.so.23.0 00:05:44.002 [299/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:05:44.002 [300/740] Linking target lib/librte_cfgfile.so.23.0 00:05:44.002 [301/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:05:44.002 [302/740] Linking target lib/librte_mbuf.so.23.0 00:05:44.002 [303/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:05:44.002 [304/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:05:44.002 [305/740] Linking static target lib/librte_gso.a 00:05:44.260 [306/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:05:44.260 [307/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:05:44.260 [308/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:05:44.260 [309/740] Linking target lib/librte_net.so.23.0 00:05:44.260 [310/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.260 [311/740] Linking target lib/librte_bbdev.so.23.0 00:05:44.260 [312/740] Linking target lib/librte_compressdev.so.23.0 00:05:44.260 [313/740] Linking target lib/librte_distributor.so.23.0 00:05:44.260 [314/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:05:44.260 [315/740] Generating lib/rte_ip_frag_def with a custom command 00:05:44.260 [316/740] Generating lib/rte_ip_frag_mingw with a custom command 00:05:44.260 [317/740] Linking target lib/librte_gpudev.so.23.0 00:05:44.260 [318/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:05:44.260 [319/740] Generating lib/rte_jobstats_def with a custom command 00:05:44.261 [320/740] Generating lib/rte_jobstats_mingw with a custom command 00:05:44.519 [321/740] Linking target lib/librte_cmdline.so.23.0 00:05:44.520 [322/740] Linking target lib/librte_hash.so.23.0 00:05:44.520 [323/740] Linking target lib/librte_ethdev.so.23.0 00:05:44.520 [324/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:05:44.520 [325/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:05:44.520 [326/740] Linking static target lib/librte_jobstats.a 00:05:44.520 [327/740] Generating lib/rte_latencystats_def with a custom command 00:05:44.520 [328/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:05:44.520 [329/740] Generating lib/rte_latencystats_mingw with a custom command 00:05:44.520 [330/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:05:44.520 [331/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:05:44.520 [332/740] Linking target 
lib/librte_efd.so.23.0 00:05:44.520 [333/740] Linking target lib/librte_metrics.so.23.0 00:05:44.778 [334/740] Linking target lib/librte_bpf.so.23.0 00:05:44.778 [335/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:05:44.778 [336/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:05:44.778 [337/740] Linking target lib/librte_gro.so.23.0 00:05:44.778 [338/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:05:44.778 [339/740] Linking target lib/librte_bitratestats.so.23.0 00:05:44.778 [340/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.778 [341/740] Generating lib/rte_lpm_mingw with a custom command 00:05:44.778 [342/740] Generating lib/rte_lpm_def with a custom command 00:05:44.778 [343/740] Linking target lib/librte_jobstats.so.23.0 00:05:44.778 [344/740] Linking target lib/librte_gso.so.23.0 00:05:44.778 [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:05:44.778 [346/740] Linking static target lib/librte_ip_frag.a 00:05:45.090 [347/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:05:45.090 [348/740] Linking static target lib/librte_latencystats.a 00:05:45.348 [349/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:05:45.348 [350/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.348 [351/740] Linking target lib/librte_ip_frag.so.23.0 00:05:45.348 [352/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.348 [353/740] Linking target lib/librte_cryptodev.so.23.0 00:05:45.348 [354/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.348 [355/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:05:45.348 [356/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:05:45.348 [357/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:05:45.348 [358/740] Linking target lib/librte_latencystats.so.23.0 00:05:45.348 [359/740] Generating lib/rte_member_def with a custom command 00:05:45.348 [360/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:05:45.348 [361/740] Generating lib/rte_pcapng_def with a custom command 00:05:45.348 [362/740] Generating lib/rte_member_mingw with a custom command 00:05:45.348 [363/740] Generating lib/rte_pcapng_mingw with a custom command 00:05:45.607 [364/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:05:45.607 [365/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:45.607 [366/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:45.607 [367/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:05:45.607 [368/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:45.607 [369/740] Linking static target lib/librte_lpm.a 00:05:45.865 [370/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:45.865 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:05:45.865 [372/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:05:45.865 [373/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:05:46.123 [374/740] Generating lib/lpm.sym_chk with a custom command (wrapped 
by meson to capture output) 00:05:46.123 [375/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:46.123 [376/740] Generating lib/rte_power_def with a custom command 00:05:46.123 [377/740] Generating lib/rte_power_mingw with a custom command 00:05:46.123 [378/740] Linking target lib/librte_lpm.so.23.0 00:05:46.123 [379/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:46.123 [380/740] Generating lib/rte_rawdev_def with a custom command 00:05:46.123 [381/740] Generating lib/rte_rawdev_mingw with a custom command 00:05:46.123 [382/740] Generating lib/rte_regexdev_def with a custom command 00:05:46.123 [383/740] Generating lib/rte_regexdev_mingw with a custom command 00:05:46.123 [384/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:05:46.123 [385/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:05:46.123 [386/740] Linking static target lib/librte_pcapng.a 00:05:46.123 [387/740] Generating lib/rte_dmadev_def with a custom command 00:05:46.382 [388/740] Generating lib/rte_dmadev_mingw with a custom command 00:05:46.382 [389/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:46.382 [390/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.382 [391/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:05:46.382 [392/740] Generating lib/rte_rib_def with a custom command 00:05:46.382 [393/740] Generating lib/rte_rib_mingw with a custom command 00:05:46.382 [394/740] Linking target lib/librte_eventdev.so.23.0 00:05:46.382 [395/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:05:46.382 [396/740] Linking static target lib/librte_rawdev.a 00:05:46.641 [397/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:05:46.641 [398/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.641 [399/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:46.641 [400/740] Linking static target lib/librte_dmadev.a 00:05:46.641 [401/740] Generating lib/rte_reorder_def with a custom command 00:05:46.641 [402/740] Linking target lib/librte_pcapng.so.23.0 00:05:46.641 [403/740] Generating lib/rte_reorder_mingw with a custom command 00:05:46.641 [404/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:46.641 [405/740] Linking static target lib/librte_power.a 00:05:46.641 [406/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:05:46.641 [407/740] Linking static target lib/librte_regexdev.a 00:05:46.641 [408/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:05:46.900 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:05:46.900 [410/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:05:46.900 [411/740] Linking static target lib/librte_member.a 00:05:46.900 [412/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:05:47.158 [413/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.158 [414/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:05:47.158 [415/740] Linking target lib/librte_rawdev.so.23.0 00:05:47.158 [416/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:47.158 [417/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 
00:05:47.158 [418/740] Generating lib/rte_sched_def with a custom command 00:05:47.158 [419/740] Linking static target lib/librte_reorder.a 00:05:47.158 [420/740] Generating lib/rte_sched_mingw with a custom command 00:05:47.158 [421/740] Generating lib/rte_security_def with a custom command 00:05:47.158 [422/740] Generating lib/rte_security_mingw with a custom command 00:05:47.158 [423/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:05:47.158 [424/740] Linking static target lib/librte_rib.a 00:05:47.158 [425/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.158 [426/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:05:47.417 [427/740] Linking target lib/librte_dmadev.so.23.0 00:05:47.417 [428/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:05:47.417 [429/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.417 [430/740] Generating lib/rte_stack_def with a custom command 00:05:47.417 [431/740] Generating lib/rte_stack_mingw with a custom command 00:05:47.417 [432/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:05:47.417 [433/740] Linking target lib/librte_member.so.23.0 00:05:47.417 [434/740] Linking static target lib/librte_stack.a 00:05:47.417 [435/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:05:47.417 [436/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.417 [437/740] Linking target lib/librte_reorder.so.23.0 00:05:47.675 [438/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:47.675 [439/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.675 [440/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.675 [441/740] Linking target lib/librte_stack.so.23.0 00:05:47.675 [442/740] Linking target lib/librte_regexdev.so.23.0 00:05:47.675 [443/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.675 [444/740] Linking target lib/librte_rib.so.23.0 00:05:47.933 [445/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.933 [446/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:47.933 [447/740] Linking static target lib/librte_security.a 00:05:47.933 [448/740] Linking target lib/librte_power.so.23.0 00:05:47.933 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:05:47.933 [450/740] Generating lib/rte_vhost_def with a custom command 00:05:47.933 [451/740] Generating lib/rte_vhost_mingw with a custom command 00:05:48.191 [452/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:48.191 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:48.191 [454/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:48.450 [455/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:05:48.450 [456/740] Linking static target lib/librte_sched.a 00:05:48.450 [457/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.450 [458/740] Linking target lib/librte_security.so.23.0 00:05:48.708 [459/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:05:48.708 [460/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 
00:05:48.708 [461/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:05:48.708 [462/740] Generating lib/rte_ipsec_def with a custom command 00:05:48.966 [463/740] Generating lib/rte_ipsec_mingw with a custom command 00:05:48.966 [464/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:48.966 [465/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:48.966 [466/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:05:49.224 [467/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.224 [468/740] Linking target lib/librte_sched.so.23.0 00:05:49.224 [469/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:05:49.483 [470/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:05:49.483 [471/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:05:49.483 [472/740] Generating lib/rte_fib_def with a custom command 00:05:49.483 [473/740] Generating lib/rte_fib_mingw with a custom command 00:05:49.483 [474/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:05:49.741 [475/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:05:49.741 [476/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:05:49.741 [477/740] Linking static target lib/librte_ipsec.a 00:05:49.741 [478/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:05:49.998 [479/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:05:49.998 [480/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:05:49.998 [481/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:05:49.998 [482/740] Linking static target lib/librte_fib.a 00:05:50.256 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:05:50.256 [484/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.256 [485/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:05:50.514 [486/740] Linking target lib/librte_ipsec.so.23.0 00:05:50.514 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:05:50.772 [488/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.772 [489/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:05:50.772 [490/740] Linking target lib/librte_fib.so.23.0 00:05:50.772 [491/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:05:51.339 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:05:51.339 [493/740] Generating lib/rte_port_def with a custom command 00:05:51.339 [494/740] Generating lib/rte_port_mingw with a custom command 00:05:51.339 [495/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:05:51.339 [496/740] Generating lib/rte_pdump_def with a custom command 00:05:51.339 [497/740] Generating lib/rte_pdump_mingw with a custom command 00:05:51.339 [498/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:05:51.339 [499/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:05:51.339 [500/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:05:51.598 [501/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:05:51.598 [502/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:05:51.856 [503/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:05:51.856 
[504/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:05:51.856 [505/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:05:51.856 [506/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:05:51.856 [507/740] Linking static target lib/librte_port.a 00:05:52.114 [508/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:05:52.114 [509/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:05:52.372 [510/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:05:52.372 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:05:52.372 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:05:52.372 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:05:52.372 [514/740] Linking static target lib/librte_pdump.a 00:05:52.630 [515/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:05:52.630 [516/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:05:52.889 [517/740] Linking target lib/librte_pdump.so.23.0 00:05:52.889 [518/740] Linking target lib/librte_port.so.23.0 00:05:52.889 [519/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:05:52.889 [520/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:05:52.889 [521/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:05:52.889 [522/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:05:52.889 [523/740] Generating lib/rte_table_def with a custom command 00:05:52.889 [524/740] Generating lib/rte_table_mingw with a custom command 00:05:53.147 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:05:53.404 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:05:53.404 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:05:53.404 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:05:53.404 [529/740] Generating lib/rte_pipeline_def with a custom command 00:05:53.404 [530/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:05:53.404 [531/740] Generating lib/rte_pipeline_mingw with a custom command 00:05:53.404 [532/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:05:53.664 [533/740] Linking static target lib/librte_table.a 00:05:53.922 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:05:53.922 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:05:54.181 [536/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:05:54.181 [537/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:05:54.181 [538/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:05:54.440 [539/740] Linking target lib/librte_table.so.23.0 00:05:54.440 [540/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:54.440 [541/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:05:54.440 [542/740] Generating lib/rte_graph_def with a custom command 00:05:54.440 [543/740] Generating lib/rte_graph_mingw with a custom command 00:05:54.440 [544/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:05:54.699 [545/740] Compiling C object 
lib/librte_graph.a.p/graph_graph_populate.c.o 00:05:54.699 [546/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:05:54.699 [547/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:05:54.699 [548/740] Linking static target lib/librte_graph.a 00:05:54.958 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:05:54.958 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:05:54.958 [551/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:05:55.216 [552/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:05:55.475 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:05:55.475 [554/740] Generating lib/rte_node_def with a custom command 00:05:55.475 [555/740] Generating lib/rte_node_mingw with a custom command 00:05:55.475 [556/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:05:55.475 [557/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:05:55.475 [558/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:55.734 [559/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:55.734 [560/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:05:55.734 [561/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:55.734 [562/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:05:55.734 [563/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:05:55.734 [564/740] Generating drivers/rte_bus_pci_def with a custom command 00:05:55.734 [565/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:05:55.993 [566/740] Linking target lib/librte_graph.so.23.0 00:05:55.993 [567/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:55.993 [568/740] Generating drivers/rte_bus_vdev_def with a custom command 00:05:55.993 [569/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:55.993 [570/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:05:55.993 [571/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:05:55.993 [572/740] Generating drivers/rte_mempool_ring_def with a custom command 00:05:55.993 [573/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:05:55.993 [574/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:55.993 [575/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:05:55.993 [576/740] Linking static target lib/librte_node.a 00:05:56.253 [577/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:56.253 [578/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:56.253 [579/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:56.253 [580/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:56.253 [581/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:56.512 [582/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:56.512 [583/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:56.512 [584/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:56.512 [585/740] Linking static target drivers/librte_bus_pci.a 00:05:56.512 [586/740] Generating lib/node.sym_chk with a custom command (wrapped 
by meson to capture output) 00:05:56.512 [587/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:56.512 [588/740] Linking static target drivers/librte_bus_vdev.a 00:05:56.512 [589/740] Linking target lib/librte_node.so.23.0 00:05:56.512 [590/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:56.771 [591/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.771 [592/740] Linking target drivers/librte_bus_vdev.so.23.0 00:05:56.771 [593/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:05:56.771 [594/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:57.030 [595/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:05:57.030 [596/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:05:57.030 [597/740] Linking target drivers/librte_bus_pci.so.23.0 00:05:57.030 [598/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:05:57.030 [599/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:57.030 [600/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:57.030 [601/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:05:57.289 [602/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:57.289 [603/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:57.289 [604/740] Linking static target drivers/librte_mempool_ring.a 00:05:57.289 [605/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:57.289 [606/740] Linking target drivers/librte_mempool_ring.so.23.0 00:05:57.548 [607/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:05:57.806 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:05:58.063 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:05:58.329 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:05:58.329 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:05:58.615 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:05:58.874 [613/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:05:58.874 [614/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:05:59.132 [615/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:05:59.390 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:05:59.651 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:05:59.651 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:05:59.651 [619/740] Generating drivers/rte_net_i40e_def with a custom command 00:05:59.651 [620/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:05:59.651 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:05:59.910 [622/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:06:00.477 [623/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:06:00.736 [624/740] 
Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:06:00.736 [625/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:06:00.994 [626/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:06:00.994 [627/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:06:01.253 [628/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:06:01.253 [629/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:06:01.253 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:06:01.253 [631/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:06:01.253 [632/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:06:01.511 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:06:01.770 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:06:01.770 [635/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:06:01.770 [636/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:06:01.770 [637/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:06:02.335 [638/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:06:02.335 [639/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:06:02.335 [640/740] Linking static target drivers/librte_net_i40e.a 00:06:02.335 [641/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:06:02.335 [642/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:06:02.335 [643/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:06:02.335 [644/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:06:02.593 [645/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:06:02.593 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:06:02.852 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:06:03.110 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:06:03.110 [649/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.368 [650/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:06:03.368 [651/740] Linking target drivers/librte_net_i40e.so.23.0 00:06:03.368 [652/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:06:03.368 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:06:03.625 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:06:03.625 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:06:03.625 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:06:03.883 [657/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:06:03.883 [658/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:06:03.883 [659/740] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:06:03.883 [660/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:06:04.140 [661/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:06:04.140 [662/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:06:04.140 [663/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:04.140 [664/740] Linking static target lib/librte_vhost.a 00:06:04.398 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:06:04.398 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:06:04.655 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:06:04.655 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:06:05.221 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:06:05.479 [670/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:06:05.479 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:06:05.738 [672/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:05.738 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:06:05.738 [674/740] Linking target lib/librte_vhost.so.23.0 00:06:05.738 [675/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:06:05.738 [676/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:06:05.997 [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:06:05.997 [678/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:06:05.997 [679/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:06:06.255 [680/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:06:06.255 [681/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:06:06.255 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:06:06.560 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:06:06.560 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:06:06.560 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:06:06.834 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:06:06.834 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:06:06.834 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:06:06.834 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:06:06.834 [690/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:06:07.401 [691/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:06:07.401 [692/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:06:07.401 [693/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:06:07.659 [694/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:06:07.917 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:06:07.917 [696/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:06:08.175 
[697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:06:08.432 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:06:08.433 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:06:08.690 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:06:08.690 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:06:08.968 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:06:09.225 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:06:09.483 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:06:09.483 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:06:09.483 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:06:09.740 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:06:09.998 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:06:10.256 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:06:10.256 [710/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:06:10.256 [711/740] Linking static target lib/librte_pipeline.a 00:06:10.514 [712/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:06:10.514 [713/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:06:10.514 [714/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:06:10.773 [715/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:06:10.773 [716/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:06:10.773 [717/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:06:10.773 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:06:10.774 [719/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:06:10.774 [720/740] Linking target app/dpdk-dumpcap 00:06:11.031 [721/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:06:11.031 [722/740] Linking target app/dpdk-proc-info 00:06:11.289 [723/740] Linking target app/dpdk-pdump 00:06:11.289 [724/740] Linking target app/dpdk-test-acl 00:06:11.289 [725/740] Linking target app/dpdk-test-bbdev 00:06:11.289 [726/740] Linking target app/dpdk-test-compress-perf 00:06:11.548 [727/740] Linking target app/dpdk-test-crypto-perf 00:06:11.548 [728/740] Linking target app/dpdk-test-eventdev 00:06:11.548 [729/740] Linking target app/dpdk-test-cmdline 00:06:11.548 [730/740] Linking target app/dpdk-test-fib 00:06:11.548 [731/740] Linking target app/dpdk-test-flow-perf 00:06:11.548 [732/740] Linking target app/dpdk-test-gpudev 00:06:11.806 [733/740] Linking target app/dpdk-test-pipeline 00:06:11.806 [734/740] Linking target app/dpdk-test-sad 00:06:11.806 [735/740] Linking target app/dpdk-testpmd 00:06:12.064 [736/740] Linking target app/dpdk-test-regex 00:06:12.630 [737/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:06:13.196 [738/740] Linking target app/dpdk-test-security-perf 00:06:14.570 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:14.570 [740/740] Linking target lib/librte_pipeline.so.23.0 00:06:14.570 15:58:44 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:06:14.570 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:06:14.889 [0/1] Installing files. 
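At this point all 740 ninja build steps have completed and autobuild_common.sh invokes the install step into a local prefix. A minimal sketch of the equivalent manual sequence is shown below; the --prefix value is inferred from the install destinations that follow (/home/vagrant/spdk_repo/dpdk/build), while any other meson options used by autobuild_common.sh are not visible in this log and are omitted here as assumptions:

    # configure the out-of-tree build directory (assumed options; only the prefix is inferred from this log)
    meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build
    # compile and link everything (the [1/740]..[740/740] steps above)
    ninja -C build-tmp -j10
    # copy libraries, headers and the examples tree under the prefix (the "Installing ..." lines below)
    ninja -C build-tmp -j10 install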
00:06:15.161 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:06:15.161 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:06:15.161 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:06:15.161 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:06:15.161 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:06:15.161 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:06:15.161 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:15.161 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:15.161 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:15.161 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:15.161 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:15.161 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:15.162 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:15.163 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:15.164 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:15.164 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:15.165 
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:06:15.165 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:15.166 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:06:15.166 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:06:15.166 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.166 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.166 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.166 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.166 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.166 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.166 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.166 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.166 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.166 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.166 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.166 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.166 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.166 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.166 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:06:15.428 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.428 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:06:15.429 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:06:15.429 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:06:15.429 Installing drivers/librte_net_i40e.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.429 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:06:15.429 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.430 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.431 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:06:15.432 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:06:15.432 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:06:15.432 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:06:15.432 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:06:15.432 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:06:15.432 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:06:15.432 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:06:15.432 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:06:15.432 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:06:15.432 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:06:15.432 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:06:15.432 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:06:15.432 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:06:15.432 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:06:15.432 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:06:15.432 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:06:15.432 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:06:15.432 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:06:15.432 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:06:15.432 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:06:15.432 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
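(Editorial sketch, not part of the console output.) The libdpdk.pc and libdpdk-libs.pc files installed above into /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig are what downstream builds use to locate the installed headers and the librte_*.so symlink chain being created here. A minimal way to consume this prefix, assuming the install finishes cleanly (hello.c is a hypothetical program that calls rte_eal_init()), would be:

  # point pkg-config at the freshly installed prefix
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk                                 # expect 22.11.x
  # shared link against the librte_*.so symlinks installed above
  cc -O2 hello.c $(pkg-config --cflags --libs libdpdk) -o hello
  LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib ./hello --no-huge -l 0

In this job the same prefix is presumably what SPDK's configure step is pointed at (e.g. ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build), and the PMD shared objects relocated into lib/dpdk/pmds-23.0 in this log form the plugin directory the EAL scans for bus and net drivers at startup.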
00:06:15.432 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:06:15.432 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:06:15.432 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:06:15.432 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:06:15.432 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:06:15.432 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:06:15.432 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:06:15.432 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:06:15.432 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:06:15.432 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:06:15.432 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:06:15.432 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:06:15.432 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:06:15.432 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:06:15.432 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:06:15.432 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:06:15.432 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:06:15.432 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:06:15.432 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:06:15.432 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:06:15.432 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:06:15.432 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:06:15.432 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:06:15.432 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:06:15.432 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:06:15.432 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:06:15.432 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:06:15.432 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:06:15.432 
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:06:15.432 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:06:15.432 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:06:15.432 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:06:15.432 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:06:15.432 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:06:15.432 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:06:15.432 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:06:15.432 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:06:15.432 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:06:15.432 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:06:15.432 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:06:15.432 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:06:15.432 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:06:15.432 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:06:15.432 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:06:15.432 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:06:15.432 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:06:15.432 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:06:15.432 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:06:15.432 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:06:15.432 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:06:15.432 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:06:15.432 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:06:15.432 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:06:15.432 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:06:15.432 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:06:15.432 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:06:15.432 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:06:15.432 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:06:15.432 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:06:15.432 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:06:15.432 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:06:15.432 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:06:15.432 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:06:15.432 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:06:15.433 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:06:15.433 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:06:15.433 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:06:15.433 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:06:15.433 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:06:15.433 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:06:15.433 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:06:15.433 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:06:15.433 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:06:15.433 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:06:15.433 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:06:15.433 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:06:15.433 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:06:15.433 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:06:15.433 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:06:15.433 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:06:15.433 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:06:15.433 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:06:15.433 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:06:15.433 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:06:15.433 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:06:15.433 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:06:15.433 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:06:15.433 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:06:15.433 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:06:15.433 
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:06:15.433 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:06:15.433 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:06:15.433 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:06:15.433 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:06:15.433 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:06:15.433 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:06:15.433 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:06:15.433 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:15.433 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:06:15.433 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:15.433 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:06:15.433 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:15.433 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:06:15.433 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:15.433 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:06:15.433 15:58:45 -- common/autobuild_common.sh@189 -- $ uname -s 00:06:15.691 15:58:45 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:06:15.691 15:58:45 -- common/autobuild_common.sh@200 -- $ cat 00:06:15.691 15:58:45 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:15.691 00:06:15.691 real 0m51.118s 00:06:15.691 user 5m30.936s 00:06:15.691 sys 1m9.564s 00:06:15.691 15:58:45 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:06:15.691 ************************************ 00:06:15.691 END TEST build_native_dpdk 00:06:15.691 ************************************ 00:06:15.691 15:58:45 -- common/autotest_common.sh@10 -- $ set +x 00:06:15.691 15:58:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:15.691 15:58:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:15.691 15:58:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:15.691 15:58:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:15.691 15:58:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:15.691 15:58:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:15.691 15:58:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:15.691 15:58:45 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:06:15.691 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:06:15.950 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:06:15.950 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:06:15.950 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:16.516 Using 'verbs' RDMA provider 00:06:32.344 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:06:47.306 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:06:47.306 Creating mk/config.mk...done. 00:06:47.306 Creating mk/cc.flags.mk...done. 00:06:47.306 Type 'make' to build. 00:06:47.306 15:59:15 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:06:47.306 15:59:15 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:06:47.306 15:59:15 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:06:47.306 15:59:15 -- common/autotest_common.sh@10 -- $ set +x 00:06:47.306 ************************************ 00:06:47.306 START TEST make 00:06:47.306 ************************************ 00:06:47.306 15:59:15 -- common/autotest_common.sh@1111 -- $ make -j10 00:06:47.306 make[1]: Nothing to be done for 'all'. 00:07:13.914 CC lib/log/log.o 00:07:13.914 CC lib/log/log_flags.o 00:07:13.914 CC lib/log/log_deprecated.o 00:07:13.914 CC lib/ut_mock/mock.o 00:07:13.914 CC lib/ut/ut.o 00:07:13.914 LIB libspdk_ut_mock.a 00:07:13.914 SO libspdk_ut_mock.so.6.0 00:07:13.914 LIB libspdk_ut.a 00:07:13.914 LIB libspdk_log.a 00:07:13.914 SYMLINK libspdk_ut_mock.so 00:07:13.914 SO libspdk_ut.so.2.0 00:07:13.914 SO libspdk_log.so.7.0 00:07:13.914 SYMLINK libspdk_ut.so 00:07:13.914 SYMLINK libspdk_log.so 00:07:13.914 CC lib/util/bit_array.o 00:07:13.914 CC lib/util/base64.o 00:07:13.914 CC lib/util/crc16.o 00:07:13.914 CC lib/util/crc32.o 00:07:13.914 CC lib/util/cpuset.o 00:07:13.914 CC lib/util/crc32c.o 00:07:13.914 CC lib/ioat/ioat.o 00:07:13.914 CC lib/dma/dma.o 00:07:13.914 CXX lib/trace_parser/trace.o 00:07:13.914 CC lib/vfio_user/host/vfio_user_pci.o 00:07:13.914 CC lib/util/crc32_ieee.o 00:07:13.914 CC lib/util/crc64.o 00:07:13.914 CC lib/util/dif.o 00:07:13.914 CC lib/util/fd.o 00:07:13.914 CC lib/vfio_user/host/vfio_user.o 00:07:13.914 LIB libspdk_dma.a 00:07:13.914 CC lib/util/file.o 00:07:13.914 CC lib/util/hexlify.o 00:07:13.914 SO libspdk_dma.so.4.0 00:07:13.914 CC lib/util/iov.o 00:07:13.914 SYMLINK libspdk_dma.so 00:07:13.914 CC lib/util/math.o 00:07:13.914 CC lib/util/pipe.o 00:07:13.914 LIB libspdk_ioat.a 00:07:13.914 CC lib/util/strerror_tls.o 00:07:13.914 SO libspdk_ioat.so.7.0 00:07:13.914 LIB libspdk_vfio_user.a 00:07:13.914 CC lib/util/string.o 00:07:13.914 CC lib/util/uuid.o 00:07:13.914 SO libspdk_vfio_user.so.5.0 00:07:13.914 SYMLINK libspdk_ioat.so 00:07:13.914 CC lib/util/fd_group.o 00:07:13.914 CC lib/util/xor.o 00:07:13.914 SYMLINK libspdk_vfio_user.so 00:07:13.914 CC lib/util/zipf.o 00:07:13.914 LIB libspdk_util.a 00:07:13.914 SO libspdk_util.so.9.0 00:07:13.914 LIB libspdk_trace_parser.a 00:07:13.914 SO libspdk_trace_parser.so.5.0 00:07:13.914 SYMLINK libspdk_util.so 00:07:13.914 SYMLINK libspdk_trace_parser.so 00:07:13.914 CC lib/idxd/idxd.o 00:07:13.914 CC lib/idxd/idxd_user.o 00:07:13.914 CC lib/vmd/vmd.o 00:07:13.914 CC lib/env_dpdk/env.o 00:07:13.914 CC lib/vmd/led.o 00:07:13.914 CC lib/env_dpdk/memory.o 00:07:13.914 CC lib/env_dpdk/pci.o 
00:07:13.914 CC lib/json/json_parse.o 00:07:13.914 CC lib/conf/conf.o 00:07:13.914 CC lib/rdma/common.o 00:07:13.914 CC lib/rdma/rdma_verbs.o 00:07:13.914 CC lib/env_dpdk/init.o 00:07:13.914 CC lib/json/json_util.o 00:07:13.914 CC lib/json/json_write.o 00:07:13.914 LIB libspdk_conf.a 00:07:13.914 SO libspdk_conf.so.6.0 00:07:13.914 CC lib/env_dpdk/threads.o 00:07:13.914 LIB libspdk_rdma.a 00:07:13.914 CC lib/env_dpdk/pci_ioat.o 00:07:13.914 SO libspdk_rdma.so.6.0 00:07:13.914 SYMLINK libspdk_conf.so 00:07:13.914 CC lib/env_dpdk/pci_virtio.o 00:07:13.914 CC lib/env_dpdk/pci_vmd.o 00:07:13.914 SYMLINK libspdk_rdma.so 00:07:13.914 CC lib/env_dpdk/pci_idxd.o 00:07:13.914 CC lib/env_dpdk/pci_event.o 00:07:13.914 CC lib/env_dpdk/sigbus_handler.o 00:07:13.914 LIB libspdk_vmd.a 00:07:13.914 CC lib/env_dpdk/pci_dpdk.o 00:07:13.914 LIB libspdk_json.a 00:07:13.914 SO libspdk_vmd.so.6.0 00:07:13.914 SO libspdk_json.so.6.0 00:07:13.914 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:13.914 LIB libspdk_idxd.a 00:07:13.914 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:13.914 SYMLINK libspdk_vmd.so 00:07:13.914 SO libspdk_idxd.so.12.0 00:07:13.914 SYMLINK libspdk_json.so 00:07:13.914 SYMLINK libspdk_idxd.so 00:07:13.914 CC lib/jsonrpc/jsonrpc_server.o 00:07:13.914 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:13.914 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:13.914 CC lib/jsonrpc/jsonrpc_client.o 00:07:13.914 LIB libspdk_jsonrpc.a 00:07:13.914 SO libspdk_jsonrpc.so.6.0 00:07:13.914 LIB libspdk_env_dpdk.a 00:07:13.914 SYMLINK libspdk_jsonrpc.so 00:07:13.914 SO libspdk_env_dpdk.so.14.0 00:07:13.914 CC lib/rpc/rpc.o 00:07:13.914 SYMLINK libspdk_env_dpdk.so 00:07:13.914 LIB libspdk_rpc.a 00:07:14.194 SO libspdk_rpc.so.6.0 00:07:14.194 SYMLINK libspdk_rpc.so 00:07:14.452 CC lib/keyring/keyring.o 00:07:14.452 CC lib/keyring/keyring_rpc.o 00:07:14.452 CC lib/notify/notify.o 00:07:14.452 CC lib/notify/notify_rpc.o 00:07:14.452 CC lib/trace/trace.o 00:07:14.452 CC lib/trace/trace_rpc.o 00:07:14.452 CC lib/trace/trace_flags.o 00:07:14.711 LIB libspdk_notify.a 00:07:14.711 LIB libspdk_keyring.a 00:07:14.711 SO libspdk_notify.so.6.0 00:07:14.711 LIB libspdk_trace.a 00:07:14.711 SO libspdk_keyring.so.1.0 00:07:14.711 SO libspdk_trace.so.10.0 00:07:14.711 SYMLINK libspdk_notify.so 00:07:14.711 SYMLINK libspdk_keyring.so 00:07:14.711 SYMLINK libspdk_trace.so 00:07:14.969 CC lib/sock/sock_rpc.o 00:07:14.969 CC lib/sock/sock.o 00:07:14.969 CC lib/thread/thread.o 00:07:14.969 CC lib/thread/iobuf.o 00:07:15.534 LIB libspdk_sock.a 00:07:15.534 SO libspdk_sock.so.9.0 00:07:15.534 SYMLINK libspdk_sock.so 00:07:15.792 CC lib/nvme/nvme_ctrlr.o 00:07:15.792 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:15.792 CC lib/nvme/nvme_fabric.o 00:07:15.792 CC lib/nvme/nvme_ns_cmd.o 00:07:15.792 CC lib/nvme/nvme_ns.o 00:07:15.792 CC lib/nvme/nvme_pcie_common.o 00:07:15.792 CC lib/nvme/nvme_pcie.o 00:07:15.792 CC lib/nvme/nvme.o 00:07:15.792 CC lib/nvme/nvme_qpair.o 00:07:16.725 CC lib/nvme/nvme_quirks.o 00:07:16.725 CC lib/nvme/nvme_transport.o 00:07:16.725 CC lib/nvme/nvme_discovery.o 00:07:16.984 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:16.984 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:16.984 CC lib/nvme/nvme_tcp.o 00:07:16.984 CC lib/nvme/nvme_opal.o 00:07:16.984 CC lib/nvme/nvme_io_msg.o 00:07:16.984 LIB libspdk_thread.a 00:07:17.242 SO libspdk_thread.so.10.0 00:07:17.242 CC lib/nvme/nvme_poll_group.o 00:07:17.242 SYMLINK libspdk_thread.so 00:07:17.242 CC lib/nvme/nvme_zns.o 00:07:17.500 CC lib/nvme/nvme_stubs.o 00:07:17.500 CC lib/nvme/nvme_auth.o 00:07:17.500 CC 
lib/nvme/nvme_cuse.o 00:07:17.775 CC lib/nvme/nvme_rdma.o 00:07:17.775 CC lib/accel/accel.o 00:07:17.775 CC lib/accel/accel_rpc.o 00:07:17.775 CC lib/accel/accel_sw.o 00:07:18.071 CC lib/blob/blobstore.o 00:07:18.071 CC lib/init/json_config.o 00:07:18.071 CC lib/init/subsystem.o 00:07:18.071 CC lib/virtio/virtio.o 00:07:18.329 CC lib/virtio/virtio_vhost_user.o 00:07:18.329 CC lib/init/subsystem_rpc.o 00:07:18.329 CC lib/init/rpc.o 00:07:18.329 CC lib/virtio/virtio_vfio_user.o 00:07:18.329 CC lib/virtio/virtio_pci.o 00:07:18.329 CC lib/blob/request.o 00:07:18.587 CC lib/blob/zeroes.o 00:07:18.587 CC lib/blob/blob_bs_dev.o 00:07:18.587 LIB libspdk_init.a 00:07:18.587 SO libspdk_init.so.5.0 00:07:18.587 SYMLINK libspdk_init.so 00:07:18.587 LIB libspdk_virtio.a 00:07:18.845 SO libspdk_virtio.so.7.0 00:07:18.845 LIB libspdk_accel.a 00:07:18.845 SYMLINK libspdk_virtio.so 00:07:18.845 SO libspdk_accel.so.15.0 00:07:18.845 CC lib/event/app.o 00:07:18.845 CC lib/event/reactor.o 00:07:18.845 CC lib/event/log_rpc.o 00:07:18.845 CC lib/event/scheduler_static.o 00:07:18.845 CC lib/event/app_rpc.o 00:07:18.845 SYMLINK libspdk_accel.so 00:07:19.102 LIB libspdk_nvme.a 00:07:19.102 CC lib/bdev/bdev.o 00:07:19.102 CC lib/bdev/bdev_rpc.o 00:07:19.102 CC lib/bdev/scsi_nvme.o 00:07:19.102 CC lib/bdev/bdev_zone.o 00:07:19.102 CC lib/bdev/part.o 00:07:19.359 SO libspdk_nvme.so.13.0 00:07:19.359 LIB libspdk_event.a 00:07:19.359 SO libspdk_event.so.13.0 00:07:19.618 SYMLINK libspdk_event.so 00:07:19.618 SYMLINK libspdk_nvme.so 00:07:21.026 LIB libspdk_blob.a 00:07:21.026 SO libspdk_blob.so.11.0 00:07:21.026 SYMLINK libspdk_blob.so 00:07:21.283 CC lib/blobfs/blobfs.o 00:07:21.283 CC lib/blobfs/tree.o 00:07:21.283 CC lib/lvol/lvol.o 00:07:21.850 LIB libspdk_bdev.a 00:07:21.850 SO libspdk_bdev.so.15.0 00:07:22.108 SYMLINK libspdk_bdev.so 00:07:22.108 LIB libspdk_blobfs.a 00:07:22.108 SO libspdk_blobfs.so.10.0 00:07:22.108 CC lib/ublk/ublk.o 00:07:22.108 CC lib/nvmf/ctrlr.o 00:07:22.108 CC lib/nvmf/ctrlr_discovery.o 00:07:22.108 CC lib/ublk/ublk_rpc.o 00:07:22.108 CC lib/nvmf/ctrlr_bdev.o 00:07:22.108 CC lib/nbd/nbd.o 00:07:22.108 CC lib/scsi/dev.o 00:07:22.108 CC lib/ftl/ftl_core.o 00:07:22.368 SYMLINK libspdk_blobfs.so 00:07:22.368 CC lib/ftl/ftl_init.o 00:07:22.368 LIB libspdk_lvol.a 00:07:22.368 SO libspdk_lvol.so.10.0 00:07:22.368 CC lib/ftl/ftl_layout.o 00:07:22.368 SYMLINK libspdk_lvol.so 00:07:22.368 CC lib/scsi/lun.o 00:07:22.368 CC lib/nbd/nbd_rpc.o 00:07:22.630 CC lib/scsi/port.o 00:07:22.630 CC lib/scsi/scsi.o 00:07:22.630 CC lib/nvmf/subsystem.o 00:07:22.630 LIB libspdk_nbd.a 00:07:22.630 CC lib/scsi/scsi_bdev.o 00:07:22.630 SO libspdk_nbd.so.7.0 00:07:22.888 CC lib/ftl/ftl_debug.o 00:07:22.888 LIB libspdk_ublk.a 00:07:22.888 CC lib/scsi/scsi_pr.o 00:07:22.888 SYMLINK libspdk_nbd.so 00:07:22.888 CC lib/scsi/scsi_rpc.o 00:07:22.888 SO libspdk_ublk.so.3.0 00:07:22.888 CC lib/ftl/ftl_io.o 00:07:22.888 CC lib/ftl/ftl_sb.o 00:07:22.888 CC lib/ftl/ftl_l2p.o 00:07:22.888 SYMLINK libspdk_ublk.so 00:07:22.888 CC lib/ftl/ftl_l2p_flat.o 00:07:23.146 CC lib/nvmf/nvmf.o 00:07:23.147 CC lib/scsi/task.o 00:07:23.147 CC lib/ftl/ftl_nv_cache.o 00:07:23.147 CC lib/ftl/ftl_band.o 00:07:23.147 CC lib/ftl/ftl_band_ops.o 00:07:23.147 CC lib/nvmf/nvmf_rpc.o 00:07:23.147 CC lib/ftl/ftl_writer.o 00:07:23.147 CC lib/ftl/ftl_rq.o 00:07:23.405 LIB libspdk_scsi.a 00:07:23.405 SO libspdk_scsi.so.9.0 00:07:23.405 CC lib/nvmf/transport.o 00:07:23.405 CC lib/nvmf/tcp.o 00:07:23.405 SYMLINK libspdk_scsi.so 00:07:23.405 CC lib/nvmf/rdma.o 
00:07:23.405 CC lib/ftl/ftl_reloc.o 00:07:23.405 CC lib/ftl/ftl_l2p_cache.o 00:07:24.058 CC lib/ftl/ftl_p2l.o 00:07:24.058 CC lib/ftl/mngt/ftl_mngt.o 00:07:24.058 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:24.058 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:24.058 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:24.058 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:24.058 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:24.058 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:24.058 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:24.317 CC lib/iscsi/conn.o 00:07:24.317 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:24.317 CC lib/vhost/vhost.o 00:07:24.317 CC lib/vhost/vhost_rpc.o 00:07:24.317 CC lib/vhost/vhost_scsi.o 00:07:24.317 CC lib/vhost/vhost_blk.o 00:07:24.317 CC lib/vhost/rte_vhost_user.o 00:07:24.317 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:24.575 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:24.834 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:24.834 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:24.834 CC lib/iscsi/init_grp.o 00:07:24.834 CC lib/iscsi/iscsi.o 00:07:25.092 CC lib/ftl/utils/ftl_conf.o 00:07:25.092 CC lib/ftl/utils/ftl_md.o 00:07:25.092 CC lib/ftl/utils/ftl_mempool.o 00:07:25.092 CC lib/ftl/utils/ftl_bitmap.o 00:07:25.092 CC lib/ftl/utils/ftl_property.o 00:07:25.350 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:25.350 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:25.350 CC lib/iscsi/md5.o 00:07:25.350 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:25.350 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:25.350 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:25.350 LIB libspdk_vhost.a 00:07:25.608 CC lib/iscsi/param.o 00:07:25.608 SO libspdk_vhost.so.8.0 00:07:25.608 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:25.608 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:25.608 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:25.608 CC lib/iscsi/portal_grp.o 00:07:25.608 CC lib/iscsi/tgt_node.o 00:07:25.608 CC lib/iscsi/iscsi_subsystem.o 00:07:25.608 LIB libspdk_nvmf.a 00:07:25.608 SYMLINK libspdk_vhost.so 00:07:25.608 CC lib/iscsi/iscsi_rpc.o 00:07:25.866 SO libspdk_nvmf.so.18.0 00:07:25.866 CC lib/iscsi/task.o 00:07:25.866 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:25.866 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:25.866 CC lib/ftl/base/ftl_base_dev.o 00:07:25.866 CC lib/ftl/base/ftl_base_bdev.o 00:07:25.866 CC lib/ftl/ftl_trace.o 00:07:26.125 SYMLINK libspdk_nvmf.so 00:07:26.125 LIB libspdk_ftl.a 00:07:26.383 LIB libspdk_iscsi.a 00:07:26.642 SO libspdk_ftl.so.9.0 00:07:26.642 SO libspdk_iscsi.so.8.0 00:07:26.901 SYMLINK libspdk_iscsi.so 00:07:26.901 SYMLINK libspdk_ftl.so 00:07:27.466 CC module/env_dpdk/env_dpdk_rpc.o 00:07:27.466 CC module/accel/ioat/accel_ioat.o 00:07:27.466 CC module/accel/dsa/accel_dsa.o 00:07:27.466 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:27.466 CC module/accel/error/accel_error.o 00:07:27.466 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:27.466 CC module/sock/posix/posix.o 00:07:27.466 CC module/accel/iaa/accel_iaa.o 00:07:27.466 CC module/keyring/file/keyring.o 00:07:27.466 CC module/blob/bdev/blob_bdev.o 00:07:27.466 LIB libspdk_env_dpdk_rpc.a 00:07:27.724 SO libspdk_env_dpdk_rpc.so.6.0 00:07:27.724 CC module/keyring/file/keyring_rpc.o 00:07:27.724 SYMLINK libspdk_env_dpdk_rpc.so 00:07:27.724 LIB libspdk_scheduler_dpdk_governor.a 00:07:27.724 CC module/accel/iaa/accel_iaa_rpc.o 00:07:27.724 CC module/accel/ioat/accel_ioat_rpc.o 00:07:27.724 CC module/accel/error/accel_error_rpc.o 00:07:27.724 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:27.724 CC module/accel/dsa/accel_dsa_rpc.o 00:07:27.724 LIB libspdk_scheduler_dynamic.a 00:07:27.724 SO 
libspdk_scheduler_dynamic.so.4.0 00:07:27.724 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:27.983 LIB libspdk_accel_iaa.a 00:07:27.983 LIB libspdk_keyring_file.a 00:07:27.983 LIB libspdk_accel_error.a 00:07:27.983 LIB libspdk_accel_ioat.a 00:07:27.983 SYMLINK libspdk_scheduler_dynamic.so 00:07:27.983 LIB libspdk_blob_bdev.a 00:07:27.983 SO libspdk_keyring_file.so.1.0 00:07:27.983 SO libspdk_accel_iaa.so.3.0 00:07:27.983 LIB libspdk_accel_dsa.a 00:07:27.983 SO libspdk_accel_ioat.so.6.0 00:07:27.983 SO libspdk_blob_bdev.so.11.0 00:07:27.983 SO libspdk_accel_error.so.2.0 00:07:27.983 SO libspdk_accel_dsa.so.5.0 00:07:27.983 SYMLINK libspdk_keyring_file.so 00:07:27.983 SYMLINK libspdk_blob_bdev.so 00:07:27.983 SYMLINK libspdk_accel_iaa.so 00:07:27.983 SYMLINK libspdk_accel_error.so 00:07:27.983 SYMLINK libspdk_accel_dsa.so 00:07:27.983 SYMLINK libspdk_accel_ioat.so 00:07:28.241 CC module/sock/uring/uring.o 00:07:28.241 CC module/scheduler/gscheduler/gscheduler.o 00:07:28.241 LIB libspdk_sock_posix.a 00:07:28.241 CC module/bdev/delay/vbdev_delay.o 00:07:28.241 CC module/bdev/null/bdev_null.o 00:07:28.241 CC module/bdev/malloc/bdev_malloc.o 00:07:28.241 CC module/bdev/error/vbdev_error.o 00:07:28.241 CC module/bdev/gpt/gpt.o 00:07:28.241 SO libspdk_sock_posix.so.6.0 00:07:28.241 CC module/blobfs/bdev/blobfs_bdev.o 00:07:28.241 CC module/bdev/lvol/vbdev_lvol.o 00:07:28.499 LIB libspdk_scheduler_gscheduler.a 00:07:28.499 SO libspdk_scheduler_gscheduler.so.4.0 00:07:28.499 SYMLINK libspdk_sock_posix.so 00:07:28.499 CC module/bdev/gpt/vbdev_gpt.o 00:07:28.499 SYMLINK libspdk_scheduler_gscheduler.so 00:07:28.499 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:28.499 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:28.499 CC module/bdev/null/bdev_null_rpc.o 00:07:28.757 CC module/bdev/error/vbdev_error_rpc.o 00:07:28.757 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:28.757 LIB libspdk_bdev_delay.a 00:07:28.757 LIB libspdk_bdev_gpt.a 00:07:28.757 LIB libspdk_blobfs_bdev.a 00:07:28.757 SO libspdk_bdev_gpt.so.6.0 00:07:28.757 CC module/bdev/nvme/bdev_nvme.o 00:07:28.757 SO libspdk_bdev_delay.so.6.0 00:07:28.757 SO libspdk_blobfs_bdev.so.6.0 00:07:28.757 LIB libspdk_bdev_null.a 00:07:28.757 LIB libspdk_bdev_error.a 00:07:28.758 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:28.758 SO libspdk_bdev_null.so.6.0 00:07:28.758 SO libspdk_bdev_error.so.6.0 00:07:28.758 SYMLINK libspdk_bdev_gpt.so 00:07:28.758 SYMLINK libspdk_bdev_delay.so 00:07:28.758 LIB libspdk_sock_uring.a 00:07:29.014 LIB libspdk_bdev_malloc.a 00:07:29.014 SYMLINK libspdk_blobfs_bdev.so 00:07:29.014 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:29.014 CC module/bdev/nvme/nvme_rpc.o 00:07:29.014 SYMLINK libspdk_bdev_null.so 00:07:29.014 SO libspdk_sock_uring.so.5.0 00:07:29.014 SO libspdk_bdev_malloc.so.6.0 00:07:29.014 CC module/bdev/passthru/vbdev_passthru.o 00:07:29.014 SYMLINK libspdk_bdev_error.so 00:07:29.014 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:29.014 SYMLINK libspdk_bdev_malloc.so 00:07:29.014 SYMLINK libspdk_sock_uring.so 00:07:29.014 CC module/bdev/nvme/bdev_mdns_client.o 00:07:29.014 CC module/bdev/nvme/vbdev_opal.o 00:07:29.014 CC module/bdev/raid/bdev_raid.o 00:07:29.272 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:29.272 LIB libspdk_bdev_lvol.a 00:07:29.272 CC module/bdev/split/vbdev_split.o 00:07:29.272 SO libspdk_bdev_lvol.so.6.0 00:07:29.272 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:29.272 LIB libspdk_bdev_passthru.a 00:07:29.272 SO libspdk_bdev_passthru.so.6.0 00:07:29.272 SYMLINK libspdk_bdev_lvol.so 00:07:29.530 
SYMLINK libspdk_bdev_passthru.so 00:07:29.530 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:29.530 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:29.530 CC module/bdev/split/vbdev_split_rpc.o 00:07:29.530 CC module/bdev/uring/bdev_uring.o 00:07:29.530 CC module/bdev/uring/bdev_uring_rpc.o 00:07:29.530 CC module/bdev/aio/bdev_aio.o 00:07:29.530 CC module/bdev/iscsi/bdev_iscsi.o 00:07:29.530 CC module/bdev/ftl/bdev_ftl.o 00:07:29.530 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:29.530 LIB libspdk_bdev_split.a 00:07:29.788 SO libspdk_bdev_split.so.6.0 00:07:29.788 SYMLINK libspdk_bdev_split.so 00:07:29.788 CC module/bdev/aio/bdev_aio_rpc.o 00:07:29.788 LIB libspdk_bdev_zone_block.a 00:07:29.788 CC module/bdev/raid/bdev_raid_rpc.o 00:07:29.788 SO libspdk_bdev_zone_block.so.6.0 00:07:30.047 SYMLINK libspdk_bdev_zone_block.so 00:07:30.047 CC module/bdev/raid/bdev_raid_sb.o 00:07:30.047 LIB libspdk_bdev_uring.a 00:07:30.047 CC module/bdev/raid/raid0.o 00:07:30.047 LIB libspdk_bdev_ftl.a 00:07:30.047 LIB libspdk_bdev_aio.a 00:07:30.047 SO libspdk_bdev_uring.so.6.0 00:07:30.047 SO libspdk_bdev_ftl.so.6.0 00:07:30.047 SO libspdk_bdev_aio.so.6.0 00:07:30.047 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:30.047 CC module/bdev/raid/raid1.o 00:07:30.047 SYMLINK libspdk_bdev_ftl.so 00:07:30.047 SYMLINK libspdk_bdev_uring.so 00:07:30.047 CC module/bdev/raid/concat.o 00:07:30.047 SYMLINK libspdk_bdev_aio.so 00:07:30.047 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:30.047 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:30.047 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:30.047 LIB libspdk_bdev_iscsi.a 00:07:30.305 SO libspdk_bdev_iscsi.so.6.0 00:07:30.305 SYMLINK libspdk_bdev_iscsi.so 00:07:30.305 LIB libspdk_bdev_raid.a 00:07:30.305 SO libspdk_bdev_raid.so.6.0 00:07:30.562 SYMLINK libspdk_bdev_raid.so 00:07:30.562 LIB libspdk_bdev_virtio.a 00:07:30.819 SO libspdk_bdev_virtio.so.6.0 00:07:30.819 SYMLINK libspdk_bdev_virtio.so 00:07:31.077 LIB libspdk_bdev_nvme.a 00:07:31.335 SO libspdk_bdev_nvme.so.7.0 00:07:31.335 SYMLINK libspdk_bdev_nvme.so 00:07:31.969 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:31.969 CC module/event/subsystems/vmd/vmd.o 00:07:31.969 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:31.969 CC module/event/subsystems/keyring/keyring.o 00:07:31.969 CC module/event/subsystems/sock/sock.o 00:07:31.969 CC module/event/subsystems/iobuf/iobuf.o 00:07:31.969 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:31.969 CC module/event/subsystems/scheduler/scheduler.o 00:07:32.225 LIB libspdk_event_vhost_blk.a 00:07:32.225 LIB libspdk_event_vmd.a 00:07:32.225 SO libspdk_event_vhost_blk.so.3.0 00:07:32.225 LIB libspdk_event_iobuf.a 00:07:32.225 LIB libspdk_event_scheduler.a 00:07:32.225 LIB libspdk_event_keyring.a 00:07:32.225 LIB libspdk_event_sock.a 00:07:32.225 SO libspdk_event_vmd.so.6.0 00:07:32.226 SO libspdk_event_scheduler.so.4.0 00:07:32.226 SO libspdk_event_iobuf.so.3.0 00:07:32.226 SO libspdk_event_sock.so.5.0 00:07:32.226 SYMLINK libspdk_event_vhost_blk.so 00:07:32.226 SO libspdk_event_keyring.so.1.0 00:07:32.226 SYMLINK libspdk_event_sock.so 00:07:32.226 SYMLINK libspdk_event_scheduler.so 00:07:32.226 SYMLINK libspdk_event_vmd.so 00:07:32.226 SYMLINK libspdk_event_keyring.so 00:07:32.226 SYMLINK libspdk_event_iobuf.so 00:07:32.787 CC module/event/subsystems/accel/accel.o 00:07:32.787 LIB libspdk_event_accel.a 00:07:33.044 SO libspdk_event_accel.so.6.0 00:07:33.044 SYMLINK libspdk_event_accel.so 00:07:33.302 CC module/event/subsystems/bdev/bdev.o 00:07:33.560 LIB 
libspdk_event_bdev.a 00:07:33.560 SO libspdk_event_bdev.so.6.0 00:07:33.816 SYMLINK libspdk_event_bdev.so 00:07:34.074 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:34.074 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:34.074 CC module/event/subsystems/ublk/ublk.o 00:07:34.074 CC module/event/subsystems/nbd/nbd.o 00:07:34.074 CC module/event/subsystems/scsi/scsi.o 00:07:34.074 LIB libspdk_event_nbd.a 00:07:34.074 LIB libspdk_event_ublk.a 00:07:34.333 LIB libspdk_event_scsi.a 00:07:34.333 SO libspdk_event_nbd.so.6.0 00:07:34.333 SO libspdk_event_ublk.so.3.0 00:07:34.333 SO libspdk_event_scsi.so.6.0 00:07:34.333 LIB libspdk_event_nvmf.a 00:07:34.333 SYMLINK libspdk_event_nbd.so 00:07:34.333 SYMLINK libspdk_event_ublk.so 00:07:34.333 SYMLINK libspdk_event_scsi.so 00:07:34.333 SO libspdk_event_nvmf.so.6.0 00:07:34.333 SYMLINK libspdk_event_nvmf.so 00:07:34.611 CC module/event/subsystems/iscsi/iscsi.o 00:07:34.611 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:34.867 LIB libspdk_event_vhost_scsi.a 00:07:34.867 LIB libspdk_event_iscsi.a 00:07:34.867 SO libspdk_event_vhost_scsi.so.3.0 00:07:34.867 SO libspdk_event_iscsi.so.6.0 00:07:34.867 SYMLINK libspdk_event_iscsi.so 00:07:34.867 SYMLINK libspdk_event_vhost_scsi.so 00:07:35.124 SO libspdk.so.6.0 00:07:35.124 SYMLINK libspdk.so 00:07:35.382 CC app/spdk_lspci/spdk_lspci.o 00:07:35.382 CC app/spdk_nvme_perf/perf.o 00:07:35.382 CXX app/trace/trace.o 00:07:35.382 CC app/trace_record/trace_record.o 00:07:35.382 CC app/spdk_nvme_identify/identify.o 00:07:35.639 CC app/nvmf_tgt/nvmf_main.o 00:07:35.639 CC app/iscsi_tgt/iscsi_tgt.o 00:07:35.639 CC app/spdk_tgt/spdk_tgt.o 00:07:35.639 CC examples/accel/perf/accel_perf.o 00:07:35.639 LINK spdk_lspci 00:07:35.639 CC test/accel/dif/dif.o 00:07:35.639 LINK spdk_trace_record 00:07:35.897 LINK nvmf_tgt 00:07:35.897 LINK spdk_tgt 00:07:35.897 LINK iscsi_tgt 00:07:35.897 LINK spdk_trace 00:07:36.155 LINK dif 00:07:36.155 CC app/spdk_nvme_discover/discovery_aer.o 00:07:36.155 CC examples/bdev/hello_world/hello_bdev.o 00:07:36.155 LINK accel_perf 00:07:36.155 CC app/spdk_top/spdk_top.o 00:07:36.155 CC app/vhost/vhost.o 00:07:36.155 CC examples/bdev/bdevperf/bdevperf.o 00:07:36.412 LINK spdk_nvme_identify 00:07:36.412 LINK spdk_nvme_discover 00:07:36.412 LINK spdk_nvme_perf 00:07:36.412 CC examples/blob/hello_world/hello_blob.o 00:07:36.412 LINK hello_bdev 00:07:36.412 LINK vhost 00:07:36.670 CC test/app/bdev_svc/bdev_svc.o 00:07:36.670 CC test/app/histogram_perf/histogram_perf.o 00:07:36.670 CC test/bdev/bdevio/bdevio.o 00:07:36.670 LINK hello_blob 00:07:36.670 CC test/app/jsoncat/jsoncat.o 00:07:36.670 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:36.670 LINK bdev_svc 00:07:36.670 LINK histogram_perf 00:07:36.927 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:36.927 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:36.927 LINK jsoncat 00:07:36.927 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:36.927 LINK bdevperf 00:07:37.185 LINK bdevio 00:07:37.185 LINK spdk_top 00:07:37.185 CC examples/blob/cli/blobcli.o 00:07:37.185 LINK nvme_fuzz 00:07:37.185 CC test/app/stub/stub.o 00:07:37.185 CC examples/ioat/perf/perf.o 00:07:37.185 CC examples/nvme/hello_world/hello_world.o 00:07:37.442 LINK stub 00:07:37.442 LINK vhost_fuzz 00:07:37.442 CC examples/ioat/verify/verify.o 00:07:37.442 CC app/spdk_dd/spdk_dd.o 00:07:37.442 LINK ioat_perf 00:07:37.442 LINK hello_world 00:07:37.442 CC app/fio/nvme/fio_plugin.o 00:07:37.442 CC examples/sock/hello_world/hello_sock.o 00:07:37.699 LINK blobcli 00:07:37.699 LINK 
verify 00:07:37.699 CC app/fio/bdev/fio_plugin.o 00:07:37.699 CC examples/nvme/reconnect/reconnect.o 00:07:37.956 CC examples/vmd/lsvmd/lsvmd.o 00:07:37.956 LINK spdk_dd 00:07:37.956 LINK hello_sock 00:07:37.956 CC examples/vmd/led/led.o 00:07:37.956 CC examples/nvmf/nvmf/nvmf.o 00:07:37.956 LINK lsvmd 00:07:37.956 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:38.213 LINK led 00:07:38.213 LINK spdk_nvme 00:07:38.213 LINK reconnect 00:07:38.213 CC examples/util/zipf/zipf.o 00:07:38.213 LINK spdk_bdev 00:07:38.471 LINK nvmf 00:07:38.471 CC examples/nvme/arbitration/arbitration.o 00:07:38.471 CC examples/thread/thread/thread_ex.o 00:07:38.471 CC examples/idxd/perf/perf.o 00:07:38.471 LINK zipf 00:07:38.471 CC examples/nvme/hotplug/hotplug.o 00:07:38.471 LINK nvme_manage 00:07:38.729 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:38.730 LINK iscsi_fuzz 00:07:38.730 CC test/blobfs/mkfs/mkfs.o 00:07:38.730 TEST_HEADER include/spdk/accel.h 00:07:38.730 TEST_HEADER include/spdk/accel_module.h 00:07:38.730 TEST_HEADER include/spdk/assert.h 00:07:38.730 TEST_HEADER include/spdk/barrier.h 00:07:38.730 LINK thread 00:07:38.730 TEST_HEADER include/spdk/base64.h 00:07:38.730 TEST_HEADER include/spdk/bdev.h 00:07:38.730 TEST_HEADER include/spdk/bdev_module.h 00:07:38.730 TEST_HEADER include/spdk/bdev_zone.h 00:07:38.730 TEST_HEADER include/spdk/bit_array.h 00:07:38.730 TEST_HEADER include/spdk/bit_pool.h 00:07:38.730 TEST_HEADER include/spdk/blob_bdev.h 00:07:38.730 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:38.730 TEST_HEADER include/spdk/blobfs.h 00:07:38.730 TEST_HEADER include/spdk/blob.h 00:07:38.730 TEST_HEADER include/spdk/conf.h 00:07:38.730 TEST_HEADER include/spdk/config.h 00:07:38.730 TEST_HEADER include/spdk/cpuset.h 00:07:38.730 TEST_HEADER include/spdk/crc16.h 00:07:38.730 TEST_HEADER include/spdk/crc32.h 00:07:38.730 TEST_HEADER include/spdk/crc64.h 00:07:38.730 LINK arbitration 00:07:38.730 TEST_HEADER include/spdk/dif.h 00:07:38.730 TEST_HEADER include/spdk/dma.h 00:07:38.730 LINK idxd_perf 00:07:38.730 TEST_HEADER include/spdk/endian.h 00:07:38.730 TEST_HEADER include/spdk/env_dpdk.h 00:07:38.730 TEST_HEADER include/spdk/env.h 00:07:38.730 TEST_HEADER include/spdk/event.h 00:07:38.730 TEST_HEADER include/spdk/fd_group.h 00:07:38.730 TEST_HEADER include/spdk/fd.h 00:07:38.730 TEST_HEADER include/spdk/file.h 00:07:38.730 TEST_HEADER include/spdk/ftl.h 00:07:38.730 TEST_HEADER include/spdk/gpt_spec.h 00:07:38.730 LINK hotplug 00:07:38.730 TEST_HEADER include/spdk/hexlify.h 00:07:38.730 TEST_HEADER include/spdk/histogram_data.h 00:07:38.730 TEST_HEADER include/spdk/idxd.h 00:07:38.730 TEST_HEADER include/spdk/idxd_spec.h 00:07:38.730 TEST_HEADER include/spdk/init.h 00:07:38.730 TEST_HEADER include/spdk/ioat.h 00:07:38.730 TEST_HEADER include/spdk/ioat_spec.h 00:07:38.730 TEST_HEADER include/spdk/iscsi_spec.h 00:07:38.730 LINK interrupt_tgt 00:07:38.730 TEST_HEADER include/spdk/json.h 00:07:38.730 TEST_HEADER include/spdk/jsonrpc.h 00:07:38.730 TEST_HEADER include/spdk/keyring.h 00:07:38.730 TEST_HEADER include/spdk/keyring_module.h 00:07:38.730 TEST_HEADER include/spdk/likely.h 00:07:38.730 TEST_HEADER include/spdk/log.h 00:07:38.730 TEST_HEADER include/spdk/lvol.h 00:07:38.730 TEST_HEADER include/spdk/memory.h 00:07:38.730 TEST_HEADER include/spdk/mmio.h 00:07:38.730 TEST_HEADER include/spdk/nbd.h 00:07:38.730 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:38.730 TEST_HEADER include/spdk/notify.h 00:07:38.730 TEST_HEADER include/spdk/nvme.h 00:07:38.730 TEST_HEADER 
include/spdk/nvme_intel.h 00:07:38.730 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:38.730 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:38.730 TEST_HEADER include/spdk/nvme_spec.h 00:07:38.730 TEST_HEADER include/spdk/nvme_zns.h 00:07:38.730 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:38.730 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:38.987 TEST_HEADER include/spdk/nvmf.h 00:07:38.987 TEST_HEADER include/spdk/nvmf_spec.h 00:07:38.987 TEST_HEADER include/spdk/nvmf_transport.h 00:07:38.987 TEST_HEADER include/spdk/opal.h 00:07:38.987 TEST_HEADER include/spdk/opal_spec.h 00:07:38.987 TEST_HEADER include/spdk/pci_ids.h 00:07:38.987 TEST_HEADER include/spdk/pipe.h 00:07:38.987 TEST_HEADER include/spdk/queue.h 00:07:38.987 TEST_HEADER include/spdk/reduce.h 00:07:38.987 TEST_HEADER include/spdk/rpc.h 00:07:38.987 TEST_HEADER include/spdk/scheduler.h 00:07:38.987 TEST_HEADER include/spdk/scsi.h 00:07:38.987 TEST_HEADER include/spdk/scsi_spec.h 00:07:38.987 TEST_HEADER include/spdk/sock.h 00:07:38.987 TEST_HEADER include/spdk/stdinc.h 00:07:38.987 TEST_HEADER include/spdk/string.h 00:07:38.987 TEST_HEADER include/spdk/thread.h 00:07:38.987 TEST_HEADER include/spdk/trace.h 00:07:38.987 TEST_HEADER include/spdk/trace_parser.h 00:07:38.987 TEST_HEADER include/spdk/tree.h 00:07:38.987 TEST_HEADER include/spdk/ublk.h 00:07:38.987 TEST_HEADER include/spdk/util.h 00:07:38.987 TEST_HEADER include/spdk/uuid.h 00:07:38.987 TEST_HEADER include/spdk/version.h 00:07:38.987 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:38.987 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:38.987 TEST_HEADER include/spdk/vhost.h 00:07:38.987 TEST_HEADER include/spdk/vmd.h 00:07:38.987 TEST_HEADER include/spdk/xor.h 00:07:38.987 TEST_HEADER include/spdk/zipf.h 00:07:38.987 CXX test/cpp_headers/accel.o 00:07:38.987 CC test/dma/test_dma/test_dma.o 00:07:38.987 LINK mkfs 00:07:38.987 CXX test/cpp_headers/accel_module.o 00:07:38.987 CXX test/cpp_headers/assert.o 00:07:38.987 CXX test/cpp_headers/barrier.o 00:07:38.987 CXX test/cpp_headers/base64.o 00:07:38.987 LINK cmb_copy 00:07:39.245 CC test/env/vtophys/vtophys.o 00:07:39.245 CXX test/cpp_headers/bdev.o 00:07:39.245 CC examples/nvme/abort/abort.o 00:07:39.245 CC test/env/mem_callbacks/mem_callbacks.o 00:07:39.245 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:39.245 LINK vtophys 00:07:39.245 CC test/event/reactor/reactor.o 00:07:39.245 CC test/event/event_perf/event_perf.o 00:07:39.245 LINK test_dma 00:07:39.503 CC test/nvme/aer/aer.o 00:07:39.503 CXX test/cpp_headers/bdev_module.o 00:07:39.503 LINK mem_callbacks 00:07:39.503 CC test/lvol/esnap/esnap.o 00:07:39.503 LINK pmr_persistence 00:07:39.503 LINK reactor 00:07:39.503 LINK event_perf 00:07:39.503 CC test/nvme/reset/reset.o 00:07:39.503 LINK abort 00:07:39.503 CXX test/cpp_headers/bdev_zone.o 00:07:39.761 LINK aer 00:07:39.761 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:39.761 CC test/nvme/sgl/sgl.o 00:07:39.761 CC test/nvme/e2edp/nvme_dp.o 00:07:39.761 CC test/nvme/overhead/overhead.o 00:07:39.761 CC test/event/reactor_perf/reactor_perf.o 00:07:39.761 CXX test/cpp_headers/bit_array.o 00:07:39.761 CXX test/cpp_headers/bit_pool.o 00:07:40.033 LINK reset 00:07:40.033 LINK env_dpdk_post_init 00:07:40.033 CC test/event/app_repeat/app_repeat.o 00:07:40.033 LINK reactor_perf 00:07:40.033 LINK sgl 00:07:40.033 CXX test/cpp_headers/blob_bdev.o 00:07:40.033 LINK nvme_dp 00:07:40.033 LINK overhead 00:07:40.033 LINK app_repeat 00:07:40.033 CC test/rpc_client/rpc_client_test.o 00:07:40.292 CXX 
test/cpp_headers/blobfs_bdev.o 00:07:40.292 CC test/env/memory/memory_ut.o 00:07:40.292 CC test/nvme/err_injection/err_injection.o 00:07:40.292 CC test/env/pci/pci_ut.o 00:07:40.292 CC test/nvme/startup/startup.o 00:07:40.292 LINK rpc_client_test 00:07:40.292 CC test/nvme/reserve/reserve.o 00:07:40.550 CXX test/cpp_headers/blobfs.o 00:07:40.550 CC test/event/scheduler/scheduler.o 00:07:40.550 LINK startup 00:07:40.550 LINK err_injection 00:07:40.550 CC test/thread/poller_perf/poller_perf.o 00:07:40.550 LINK reserve 00:07:40.807 CC test/nvme/simple_copy/simple_copy.o 00:07:40.807 CXX test/cpp_headers/blob.o 00:07:40.807 LINK poller_perf 00:07:40.807 LINK scheduler 00:07:40.807 LINK pci_ut 00:07:40.807 LINK memory_ut 00:07:40.807 CC test/nvme/connect_stress/connect_stress.o 00:07:40.807 CXX test/cpp_headers/conf.o 00:07:40.807 CXX test/cpp_headers/config.o 00:07:40.807 CC test/nvme/boot_partition/boot_partition.o 00:07:40.807 LINK simple_copy 00:07:41.064 CC test/nvme/compliance/nvme_compliance.o 00:07:41.064 CXX test/cpp_headers/cpuset.o 00:07:41.064 CC test/nvme/fused_ordering/fused_ordering.o 00:07:41.064 LINK connect_stress 00:07:41.064 CXX test/cpp_headers/crc16.o 00:07:41.064 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:41.064 LINK boot_partition 00:07:41.064 CC test/nvme/fdp/fdp.o 00:07:41.322 CXX test/cpp_headers/crc32.o 00:07:41.322 CC test/nvme/cuse/cuse.o 00:07:41.322 CXX test/cpp_headers/crc64.o 00:07:41.322 CXX test/cpp_headers/dif.o 00:07:41.322 LINK fused_ordering 00:07:41.322 CXX test/cpp_headers/dma.o 00:07:41.322 LINK doorbell_aers 00:07:41.322 LINK nvme_compliance 00:07:41.322 CXX test/cpp_headers/endian.o 00:07:41.322 CXX test/cpp_headers/env_dpdk.o 00:07:41.579 CXX test/cpp_headers/env.o 00:07:41.579 CXX test/cpp_headers/event.o 00:07:41.579 CXX test/cpp_headers/fd_group.o 00:07:41.579 LINK fdp 00:07:41.579 CXX test/cpp_headers/fd.o 00:07:41.579 CXX test/cpp_headers/file.o 00:07:41.579 CXX test/cpp_headers/ftl.o 00:07:41.579 CXX test/cpp_headers/gpt_spec.o 00:07:41.579 CXX test/cpp_headers/hexlify.o 00:07:41.579 CXX test/cpp_headers/histogram_data.o 00:07:41.579 CXX test/cpp_headers/idxd.o 00:07:41.837 CXX test/cpp_headers/idxd_spec.o 00:07:41.837 CXX test/cpp_headers/init.o 00:07:41.837 CXX test/cpp_headers/ioat.o 00:07:41.837 CXX test/cpp_headers/ioat_spec.o 00:07:41.837 CXX test/cpp_headers/iscsi_spec.o 00:07:41.837 CXX test/cpp_headers/json.o 00:07:41.837 CXX test/cpp_headers/jsonrpc.o 00:07:41.837 CXX test/cpp_headers/keyring.o 00:07:41.837 CXX test/cpp_headers/keyring_module.o 00:07:41.837 CXX test/cpp_headers/likely.o 00:07:42.093 CXX test/cpp_headers/log.o 00:07:42.093 CXX test/cpp_headers/lvol.o 00:07:42.093 CXX test/cpp_headers/memory.o 00:07:42.093 CXX test/cpp_headers/mmio.o 00:07:42.093 CXX test/cpp_headers/nbd.o 00:07:42.093 CXX test/cpp_headers/notify.o 00:07:42.093 CXX test/cpp_headers/nvme.o 00:07:42.093 CXX test/cpp_headers/nvme_intel.o 00:07:42.093 CXX test/cpp_headers/nvme_ocssd.o 00:07:42.093 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:42.093 CXX test/cpp_headers/nvme_spec.o 00:07:42.093 CXX test/cpp_headers/nvme_zns.o 00:07:42.093 CXX test/cpp_headers/nvmf_cmd.o 00:07:42.350 LINK cuse 00:07:42.350 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:42.350 CXX test/cpp_headers/nvmf.o 00:07:42.350 CXX test/cpp_headers/nvmf_spec.o 00:07:42.350 CXX test/cpp_headers/nvmf_transport.o 00:07:42.350 CXX test/cpp_headers/opal.o 00:07:42.350 CXX test/cpp_headers/opal_spec.o 00:07:42.350 CXX test/cpp_headers/pci_ids.o 00:07:42.350 CXX test/cpp_headers/pipe.o 
00:07:42.659 CXX test/cpp_headers/queue.o 00:07:42.659 CXX test/cpp_headers/reduce.o 00:07:42.659 CXX test/cpp_headers/rpc.o 00:07:42.659 CXX test/cpp_headers/scheduler.o 00:07:42.659 CXX test/cpp_headers/scsi.o 00:07:42.659 CXX test/cpp_headers/scsi_spec.o 00:07:42.659 CXX test/cpp_headers/sock.o 00:07:42.659 CXX test/cpp_headers/stdinc.o 00:07:42.659 CXX test/cpp_headers/string.o 00:07:42.659 CXX test/cpp_headers/thread.o 00:07:42.659 CXX test/cpp_headers/trace.o 00:07:42.659 CXX test/cpp_headers/trace_parser.o 00:07:42.659 CXX test/cpp_headers/tree.o 00:07:42.659 CXX test/cpp_headers/ublk.o 00:07:42.659 CXX test/cpp_headers/util.o 00:07:42.659 CXX test/cpp_headers/uuid.o 00:07:42.917 CXX test/cpp_headers/version.o 00:07:42.917 CXX test/cpp_headers/vfio_user_pci.o 00:07:42.917 CXX test/cpp_headers/vfio_user_spec.o 00:07:42.917 CXX test/cpp_headers/vhost.o 00:07:42.917 CXX test/cpp_headers/vmd.o 00:07:42.917 CXX test/cpp_headers/xor.o 00:07:42.917 CXX test/cpp_headers/zipf.o 00:07:44.299 LINK esnap 00:07:44.556 ************************************ 00:07:44.556 END TEST make 00:07:44.556 ************************************ 00:07:44.556 00:07:44.556 real 0m58.658s 00:07:44.556 user 5m0.979s 00:07:44.556 sys 1m23.172s 00:07:44.556 16:00:14 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:07:44.556 16:00:14 -- common/autotest_common.sh@10 -- $ set +x 00:07:44.556 16:00:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:44.556 16:00:14 -- pm/common@30 -- $ signal_monitor_resources TERM 00:07:44.556 16:00:14 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:07:44.556 16:00:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:44.556 16:00:14 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:44.556 16:00:14 -- pm/common@45 -- $ pid=5819 00:07:44.556 16:00:14 -- pm/common@52 -- $ sudo kill -TERM 5819 00:07:44.814 16:00:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:44.814 16:00:14 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:44.814 16:00:14 -- pm/common@45 -- $ pid=5818 00:07:44.814 16:00:14 -- pm/common@52 -- $ sudo kill -TERM 5818 00:07:44.814 16:00:14 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:44.814 16:00:14 -- nvmf/common.sh@7 -- # uname -s 00:07:44.814 16:00:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.814 16:00:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.814 16:00:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.814 16:00:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.814 16:00:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.814 16:00:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.814 16:00:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.814 16:00:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.814 16:00:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.814 16:00:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.814 16:00:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:07:44.814 16:00:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:07:44.814 16:00:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.814 16:00:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.814 16:00:14 -- 
nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:44.814 16:00:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.814 16:00:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:44.814 16:00:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.814 16:00:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.814 16:00:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.814 16:00:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.814 16:00:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.814 16:00:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.814 16:00:14 -- paths/export.sh@5 -- # export PATH 00:07:44.814 16:00:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.814 16:00:14 -- nvmf/common.sh@47 -- # : 0 00:07:44.814 16:00:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:44.814 16:00:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:44.814 16:00:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.814 16:00:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.814 16:00:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.814 16:00:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:44.814 16:00:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:44.814 16:00:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:44.814 16:00:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:44.814 16:00:14 -- spdk/autotest.sh@32 -- # uname -s 00:07:44.814 16:00:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:44.814 16:00:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:44.814 16:00:14 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:44.814 16:00:14 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:44.814 16:00:14 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:44.814 16:00:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:45.072 16:00:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:45.072 16:00:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:45.072 16:00:14 -- spdk/autotest.sh@48 -- # udevadm_pid=64743 00:07:45.072 16:00:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:45.072 16:00:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:45.072 16:00:14 -- 
pm/common@17 -- # local monitor 00:07:45.072 16:00:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:45.072 16:00:14 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=64745 00:07:45.072 16:00:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:45.072 16:00:14 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=64747 00:07:45.072 16:00:14 -- pm/common@26 -- # sleep 1 00:07:45.072 16:00:14 -- pm/common@21 -- # date +%s 00:07:45.072 16:00:14 -- pm/common@21 -- # date +%s 00:07:45.072 16:00:14 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713196814 00:07:45.072 16:00:14 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713196814 00:07:45.072 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713196814_collect-vmstat.pm.log 00:07:45.072 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713196814_collect-cpu-load.pm.log 00:07:46.007 16:00:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:46.007 16:00:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:46.007 16:00:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:46.007 16:00:15 -- common/autotest_common.sh@10 -- # set +x 00:07:46.007 16:00:15 -- spdk/autotest.sh@59 -- # create_test_list 00:07:46.007 16:00:15 -- common/autotest_common.sh@734 -- # xtrace_disable 00:07:46.007 16:00:15 -- common/autotest_common.sh@10 -- # set +x 00:07:46.007 16:00:15 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:46.007 16:00:15 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:46.007 16:00:15 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:46.007 16:00:15 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:46.007 16:00:15 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:46.007 16:00:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:46.007 16:00:15 -- common/autotest_common.sh@1441 -- # uname 00:07:46.007 16:00:15 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:07:46.007 16:00:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:46.007 16:00:15 -- common/autotest_common.sh@1461 -- # uname 00:07:46.007 16:00:15 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:07:46.007 16:00:15 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:07:46.007 16:00:15 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:07:46.007 16:00:15 -- spdk/autotest.sh@72 -- # hash lcov 00:07:46.007 16:00:15 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:07:46.007 16:00:15 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:07:46.007 --rc lcov_branch_coverage=1 00:07:46.007 --rc lcov_function_coverage=1 00:07:46.007 --rc genhtml_branch_coverage=1 00:07:46.007 --rc genhtml_function_coverage=1 00:07:46.007 --rc genhtml_legend=1 00:07:46.007 --rc geninfo_all_blocks=1 00:07:46.007 ' 00:07:46.007 16:00:15 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:07:46.007 --rc lcov_branch_coverage=1 00:07:46.007 --rc lcov_function_coverage=1 00:07:46.007 --rc genhtml_branch_coverage=1 00:07:46.007 --rc genhtml_function_coverage=1 00:07:46.007 --rc genhtml_legend=1 00:07:46.007 --rc geninfo_all_blocks=1 00:07:46.007 ' 
00:07:46.007 16:00:15 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:07:46.007 --rc lcov_branch_coverage=1 00:07:46.007 --rc lcov_function_coverage=1 00:07:46.007 --rc genhtml_branch_coverage=1 00:07:46.007 --rc genhtml_function_coverage=1 00:07:46.007 --rc genhtml_legend=1 00:07:46.007 --rc geninfo_all_blocks=1 00:07:46.007 --no-external' 00:07:46.007 16:00:15 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:07:46.007 --rc lcov_branch_coverage=1 00:07:46.007 --rc lcov_function_coverage=1 00:07:46.007 --rc genhtml_branch_coverage=1 00:07:46.007 --rc genhtml_function_coverage=1 00:07:46.007 --rc genhtml_legend=1 00:07:46.007 --rc geninfo_all_blocks=1 00:07:46.007 --no-external' 00:07:46.007 16:00:15 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:07:46.265 lcov: LCOV version 1.14 00:07:46.265 16:00:15 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:56.236 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:07:56.236 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:07:56.236 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:07:56.236 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:07:56.236 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:07:56.236 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:08:04.480 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:04.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:08:19.467 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:08:19.467 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:08:19.467 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:08:19.468 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:08:19.468 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:08:19.468 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:08:19.468 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:08:23.656 16:00:53 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:08:23.656 16:00:53 -- common/autotest_common.sh@710 -- # xtrace_disable 
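The lcov invocation traced above records the zero-count "Baseline" coverage snapshot before any test executes; the long run of geninfo warnings only means those .gcno files contain no instrumented functions yet. A condensed sketch of that capture step, using the rc options and output path visible in the trace (the shell variable names are mine):

LCOV_RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1'
src=/home/vagrant/spdk_repo/spdk
out=/home/vagrant/spdk_repo/spdk/../output
# -c -i captures initial (all-zero) coverage for every object compiled under $src.
lcov $LCOV_RC --no-external -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"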
00:08:23.656 16:00:53 -- common/autotest_common.sh@10 -- # set +x 00:08:23.656 16:00:53 -- spdk/autotest.sh@91 -- # rm -f 00:08:23.656 16:00:53 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:23.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:23.915 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:23.915 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:24.172 16:00:53 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:08:24.172 16:00:53 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:24.172 16:00:53 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:24.172 16:00:53 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:24.172 16:00:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:24.172 16:00:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:24.172 16:00:53 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:24.172 16:00:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:24.172 16:00:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:24.172 16:00:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:24.172 16:00:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:24.172 16:00:53 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:24.172 16:00:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:24.172 16:00:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:24.172 16:00:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:24.172 16:00:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:08:24.172 16:00:53 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:08:24.172 16:00:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:24.172 16:00:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:24.172 16:00:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:24.172 16:00:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:08:24.172 16:00:53 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:08:24.172 16:00:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:24.172 16:00:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:24.172 16:00:53 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:08:24.172 16:00:53 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:08:24.172 16:00:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:08:24.172 16:00:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:08:24.172 16:00:53 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:08:24.172 16:00:53 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:24.172 No valid GPT data, bailing 00:08:24.172 16:00:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:24.172 16:00:53 -- scripts/common.sh@391 -- # pt= 00:08:24.172 16:00:53 -- scripts/common.sh@392 -- # return 1 00:08:24.172 16:00:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:24.172 1+0 records in 00:08:24.172 1+0 records out 00:08:24.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428751 s, 245 MB/s 00:08:24.172 16:00:53 -- spdk/autotest.sh@110 -- # for dev in 
/dev/nvme*n!(*p*) 00:08:24.172 16:00:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:08:24.172 16:00:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:08:24.172 16:00:53 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:08:24.172 16:00:53 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:24.172 No valid GPT data, bailing 00:08:24.172 16:00:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:24.172 16:00:54 -- scripts/common.sh@391 -- # pt= 00:08:24.172 16:00:54 -- scripts/common.sh@392 -- # return 1 00:08:24.172 16:00:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:24.172 1+0 records in 00:08:24.172 1+0 records out 00:08:24.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00372068 s, 282 MB/s 00:08:24.172 16:00:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:08:24.172 16:00:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:08:24.172 16:00:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:08:24.172 16:00:54 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:08:24.172 16:00:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:08:24.172 No valid GPT data, bailing 00:08:24.172 16:00:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:24.430 16:00:54 -- scripts/common.sh@391 -- # pt= 00:08:24.430 16:00:54 -- scripts/common.sh@392 -- # return 1 00:08:24.430 16:00:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:24.430 1+0 records in 00:08:24.430 1+0 records out 00:08:24.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447371 s, 234 MB/s 00:08:24.430 16:00:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:08:24.430 16:00:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:08:24.430 16:00:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:08:24.430 16:00:54 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:08:24.430 16:00:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:24.430 No valid GPT data, bailing 00:08:24.430 16:00:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:24.430 16:00:54 -- scripts/common.sh@391 -- # pt= 00:08:24.430 16:00:54 -- scripts/common.sh@392 -- # return 1 00:08:24.430 16:00:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:24.430 1+0 records in 00:08:24.430 1+0 records out 00:08:24.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00379189 s, 277 MB/s 00:08:24.430 16:00:54 -- spdk/autotest.sh@118 -- # sync 00:08:24.430 16:00:54 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:24.430 16:00:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:24.430 16:00:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:26.336 16:00:55 -- spdk/autotest.sh@124 -- # uname -s 00:08:26.336 16:00:55 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:08:26.336 16:00:55 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:08:26.336 16:00:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:26.336 16:00:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.336 16:00:55 -- common/autotest_common.sh@10 -- # set +x 00:08:26.336 ************************************ 00:08:26.336 START TEST setup.sh 00:08:26.336 ************************************ 
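Before TEST setup.sh starts, the pre-cleanup pass above walks every non-partition nvme namespace, confirms it holds no GPT or other partition table, and then zeroes its first MiB so the setup tests see blank devices. A sketch of that loop as I read the trace (the exit-status handling of spdk-gpt.py is simplified and assumed):

shopt -s extglob
for dev in /dev/nvme*n!(*p*); do
    # Leave the device untouched if spdk-gpt.py finds valid GPT data
    # or blkid still reports a partition-table type on it.
    if /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev" \
       || [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
        continue
    fi
    # Wipe the first MiB, matching the "1+0 records in/out" dd output above.
    dd if=/dev/zero of="$dev" bs=1M count=1
done
sync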
00:08:26.336 16:00:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:08:26.336 * Looking for test storage... 00:08:26.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:26.336 16:00:56 -- setup/test-setup.sh@10 -- # uname -s 00:08:26.336 16:00:56 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:08:26.336 16:00:56 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:08:26.336 16:00:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:26.336 16:00:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.336 16:00:56 -- common/autotest_common.sh@10 -- # set +x 00:08:26.336 ************************************ 00:08:26.336 START TEST acl 00:08:26.336 ************************************ 00:08:26.336 16:00:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:08:26.336 * Looking for test storage... 00:08:26.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:26.336 16:00:56 -- setup/acl.sh@10 -- # get_zoned_devs 00:08:26.336 16:00:56 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:26.336 16:00:56 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:26.336 16:00:56 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:26.336 16:00:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:26.336 16:00:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:26.336 16:00:56 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:26.336 16:00:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:26.336 16:00:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:26.336 16:00:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:26.336 16:00:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:26.336 16:00:56 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:26.336 16:00:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:26.336 16:00:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:26.336 16:00:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:26.336 16:00:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:08:26.336 16:00:56 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:08:26.336 16:00:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:26.336 16:00:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:26.336 16:00:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:26.336 16:00:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:08:26.336 16:00:56 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:08:26.336 16:00:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:26.336 16:00:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:26.336 16:00:56 -- setup/acl.sh@12 -- # devs=() 00:08:26.336 16:00:56 -- setup/acl.sh@12 -- # declare -a devs 00:08:26.336 16:00:56 -- setup/acl.sh@13 -- # drivers=() 00:08:26.336 16:00:56 -- setup/acl.sh@13 -- # declare -A drivers 00:08:26.336 16:00:56 -- setup/acl.sh@51 -- # setup reset 00:08:26.336 16:00:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:26.336 16:00:56 -- setup/common.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:27.271 16:00:57 -- setup/acl.sh@52 -- # collect_setup_devs 00:08:27.271 16:00:57 -- setup/acl.sh@16 -- # local dev driver 00:08:27.271 16:00:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:27.271 16:00:57 -- setup/acl.sh@15 -- # setup output status 00:08:27.271 16:00:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:27.271 16:00:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:27.836 16:00:57 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:08:27.836 16:00:57 -- setup/acl.sh@19 -- # continue 00:08:27.836 16:00:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:27.836 Hugepages 00:08:27.836 node hugesize free / total 00:08:27.836 16:00:57 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:08:27.836 16:00:57 -- setup/acl.sh@19 -- # continue 00:08:27.836 16:00:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:27.836 00:08:27.836 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:27.836 16:00:57 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:08:27.836 16:00:57 -- setup/acl.sh@19 -- # continue 00:08:27.836 16:00:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:28.160 16:00:57 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:08:28.160 16:00:57 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:08:28.160 16:00:57 -- setup/acl.sh@20 -- # continue 00:08:28.160 16:00:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:28.160 16:00:57 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:08:28.160 16:00:57 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:28.160 16:00:57 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:08:28.160 16:00:57 -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:28.160 16:00:57 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:28.160 16:00:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:28.160 16:00:58 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:08:28.160 16:00:58 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:28.160 16:00:58 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:08:28.160 16:00:58 -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:28.160 16:00:58 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:28.160 16:00:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:28.160 16:00:58 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:08:28.160 16:00:58 -- setup/acl.sh@54 -- # run_test denied denied 00:08:28.160 16:00:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:28.160 16:00:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:28.160 16:00:58 -- common/autotest_common.sh@10 -- # set +x 00:08:28.432 ************************************ 00:08:28.433 START TEST denied 00:08:28.433 ************************************ 00:08:28.433 16:00:58 -- common/autotest_common.sh@1111 -- # denied 00:08:28.433 16:00:58 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:08:28.433 16:00:58 -- setup/acl.sh@38 -- # setup output config 00:08:28.433 16:00:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:28.433 16:00:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:28.433 16:00:58 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:08:29.365 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:08:29.365 16:00:59 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:08:29.365 16:00:59 -- setup/acl.sh@28 -- # local dev driver 00:08:29.365 
16:00:59 -- setup/acl.sh@30 -- # for dev in "$@" 00:08:29.365 16:00:59 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:08:29.365 16:00:59 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:08:29.365 16:00:59 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:29.365 16:00:59 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:29.365 16:00:59 -- setup/acl.sh@41 -- # setup reset 00:08:29.365 16:00:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:29.365 16:00:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:29.931 00:08:29.931 real 0m1.687s 00:08:29.931 user 0m0.658s 00:08:29.931 sys 0m0.972s 00:08:29.931 16:00:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:29.931 16:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:29.931 ************************************ 00:08:29.931 END TEST denied 00:08:29.931 ************************************ 00:08:29.931 16:00:59 -- setup/acl.sh@55 -- # run_test allowed allowed 00:08:29.931 16:00:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:29.931 16:00:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.931 16:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:30.189 ************************************ 00:08:30.189 START TEST allowed 00:08:30.189 ************************************ 00:08:30.189 16:00:59 -- common/autotest_common.sh@1111 -- # allowed 00:08:30.189 16:00:59 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:08:30.189 16:00:59 -- setup/acl.sh@45 -- # setup output config 00:08:30.189 16:00:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:30.189 16:00:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:30.189 16:00:59 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:08:31.217 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:31.217 16:01:00 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:08:31.217 16:01:00 -- setup/acl.sh@28 -- # local dev driver 00:08:31.217 16:01:00 -- setup/acl.sh@30 -- # for dev in "$@" 00:08:31.217 16:01:00 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:08:31.217 16:01:00 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:08:31.217 16:01:00 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:31.217 16:01:00 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:31.217 16:01:00 -- setup/acl.sh@48 -- # setup reset 00:08:31.217 16:01:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:31.217 16:01:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:31.783 00:08:31.783 real 0m1.762s 00:08:31.783 user 0m0.723s 00:08:31.783 sys 0m1.031s 00:08:31.783 16:01:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:31.783 16:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.783 ************************************ 00:08:31.783 END TEST allowed 00:08:31.783 ************************************ 00:08:31.783 00:08:31.783 real 0m5.634s 00:08:31.783 user 0m2.360s 00:08:31.783 sys 0m3.211s 00:08:31.783 16:01:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:31.783 16:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.783 ************************************ 00:08:31.783 END TEST acl 00:08:31.783 ************************************ 00:08:32.041 16:01:01 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:08:32.041 16:01:01 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.041 16:01:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.041 16:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:32.041 ************************************ 00:08:32.041 START TEST hugepages 00:08:32.041 ************************************ 00:08:32.041 16:01:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:08:32.041 * Looking for test storage... 00:08:32.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:32.041 16:01:01 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:08:32.041 16:01:01 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:08:32.041 16:01:01 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:08:32.041 16:01:01 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:08:32.041 16:01:01 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:08:32.041 16:01:01 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:08:32.041 16:01:01 -- setup/common.sh@17 -- # local get=Hugepagesize 00:08:32.041 16:01:01 -- setup/common.sh@18 -- # local node= 00:08:32.041 16:01:01 -- setup/common.sh@19 -- # local var val 00:08:32.041 16:01:01 -- setup/common.sh@20 -- # local mem_f mem 00:08:32.041 16:01:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:32.041 16:01:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:32.041 16:01:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:32.041 16:01:01 -- setup/common.sh@28 -- # mapfile -t mem 00:08:32.041 16:01:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:32.041 16:01:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 4918824 kB' 'MemAvailable: 7431476 kB' 'Buffers: 2436 kB' 'Cached: 2711096 kB' 'SwapCached: 0 kB' 'Active: 427784 kB' 'Inactive: 2393328 kB' 'Active(anon): 107384 kB' 'Inactive(anon): 10688 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382640 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 107800 kB' 'Mapped: 49268 kB' 'Shmem: 10492 kB' 'KReclaimable: 92804 kB' 'Slab: 170696 kB' 'SReclaimable: 92804 kB' 'SUnreclaim: 77892 kB' 'KernelStack: 4832 kB' 'PageTables: 3536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12407572 kB' 'Committed_AS: 339280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53344 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:32.041 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.041 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.041 16:01:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.041 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.041 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.041 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.041 16:01:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.041 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.041 16:01:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:32.041 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.041 16:01:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.041 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.041 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.041 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.041 16:01:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.041 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.041 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.041 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.041 16:01:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.041 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- 
setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.042 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.042 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.043 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.043 16:01:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.043 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.043 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.043 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.043 16:01:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.043 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.043 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.043 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.043 16:01:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.043 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.043 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.043 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.043 16:01:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.043 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.043 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.043 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.043 16:01:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.043 16:01:01 -- setup/common.sh@32 -- # continue 00:08:32.043 16:01:01 -- setup/common.sh@31 -- # IFS=': ' 00:08:32.043 16:01:01 -- setup/common.sh@31 -- # read -r var val _ 00:08:32.043 16:01:01 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:32.043 16:01:01 -- setup/common.sh@33 -- # echo 2048 00:08:32.043 16:01:01 -- setup/common.sh@33 -- # return 0 00:08:32.301 
16:01:01 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:08:32.301 16:01:01 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:08:32.301 16:01:01 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:08:32.301 16:01:01 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:08:32.301 16:01:01 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:08:32.301 16:01:01 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:08:32.301 16:01:01 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:08:32.301 16:01:02 -- setup/hugepages.sh@207 -- # get_nodes 00:08:32.301 16:01:02 -- setup/hugepages.sh@27 -- # local node 00:08:32.301 16:01:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:32.301 16:01:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:08:32.301 16:01:02 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:32.301 16:01:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:32.301 16:01:02 -- setup/hugepages.sh@208 -- # clear_hp 00:08:32.301 16:01:02 -- setup/hugepages.sh@37 -- # local node hp 00:08:32.301 16:01:02 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:32.301 16:01:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:32.301 16:01:02 -- setup/hugepages.sh@41 -- # echo 0 00:08:32.301 16:01:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:32.301 16:01:02 -- setup/hugepages.sh@41 -- # echo 0 00:08:32.301 16:01:02 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:08:32.301 16:01:02 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:32.301 16:01:02 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:08:32.301 16:01:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.301 16:01:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.301 16:01:02 -- common/autotest_common.sh@10 -- # set +x 00:08:32.301 ************************************ 00:08:32.301 START TEST default_setup 00:08:32.301 ************************************ 00:08:32.301 16:01:02 -- common/autotest_common.sh@1111 -- # default_setup 00:08:32.301 16:01:02 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:08:32.301 16:01:02 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:32.301 16:01:02 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:32.301 16:01:02 -- setup/hugepages.sh@51 -- # shift 00:08:32.301 16:01:02 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:32.301 16:01:02 -- setup/hugepages.sh@52 -- # local node_ids 00:08:32.301 16:01:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:32.301 16:01:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:32.301 16:01:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:32.301 16:01:02 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:32.301 16:01:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:32.301 16:01:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:32.301 16:01:02 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:32.301 16:01:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:32.301 16:01:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:32.301 16:01:02 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:32.301 16:01:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:32.301 16:01:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 
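get_test_nr_hugepages above comes down to one division: the requested amount (2097152, which I read as kB) over the 2048 kB Hugepagesize reported by /proc/meminfo gives the 1024 pages assigned to node 0. A small stand-alone sketch of that arithmetic (the helper name and the kB assumption are mine, not the script's):

# pages = requested_kB / Hugepagesize_kB, e.g. 2097152 / 2048 = 1024 above.
hugepage_count() {
    local requested_kb=$1
    local hp_kb
    hp_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
    echo $(( requested_kb / hp_kb ))
}
hugepage_count 2097152    # prints 1024 when huge pages are 2048 kB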
00:08:32.301 16:01:02 -- setup/hugepages.sh@73 -- # return 0 00:08:32.301 16:01:02 -- setup/hugepages.sh@137 -- # setup output 00:08:32.301 16:01:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:32.301 16:01:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:33.240 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:33.240 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:33.240 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:33.240 16:01:03 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:08:33.240 16:01:03 -- setup/hugepages.sh@89 -- # local node 00:08:33.240 16:01:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:33.240 16:01:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:33.240 16:01:03 -- setup/hugepages.sh@92 -- # local surp 00:08:33.240 16:01:03 -- setup/hugepages.sh@93 -- # local resv 00:08:33.240 16:01:03 -- setup/hugepages.sh@94 -- # local anon 00:08:33.240 16:01:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:33.240 16:01:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:33.240 16:01:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:33.240 16:01:03 -- setup/common.sh@18 -- # local node= 00:08:33.240 16:01:03 -- setup/common.sh@19 -- # local var val 00:08:33.240 16:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:33.240 16:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:33.240 16:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:33.240 16:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:33.240 16:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:33.240 16:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:33.240 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.240 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.240 16:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7011340 kB' 'MemAvailable: 9523888 kB' 'Buffers: 2436 kB' 'Cached: 2711116 kB' 'SwapCached: 0 kB' 'Active: 444296 kB' 'Inactive: 2393348 kB' 'Active(anon): 123896 kB' 'Inactive(anon): 10672 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382676 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 840 kB' 'Writeback: 0 kB' 'AnonPages: 124252 kB' 'Mapped: 49184 kB' 'Shmem: 10468 kB' 'KReclaimable: 92528 kB' 'Slab: 170492 kB' 'SReclaimable: 92528 kB' 'SUnreclaim: 77964 kB' 'KernelStack: 4800 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 355520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:33.240 16:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.240 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.240 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.240 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.240 16:01:03 
-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.240 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.240 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.240 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.240 16:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.240 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.240 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.240 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.240 16:01:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.240 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.240 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.240 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.240 16:01:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.240 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.240 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 
16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.241 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.241 16:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.241 16:01:03 -- setup/common.sh@33 -- # echo 0 00:08:33.241 16:01:03 -- setup/common.sh@33 -- # return 0 00:08:33.241 16:01:03 -- setup/hugepages.sh@97 -- # anon=0 00:08:33.241 16:01:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:33.241 16:01:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:33.241 16:01:03 -- setup/common.sh@18 -- # local node= 00:08:33.241 16:01:03 -- setup/common.sh@19 -- # local var val 00:08:33.241 16:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:33.241 16:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:33.241 16:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:33.241 16:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:33.241 16:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:33.241 16:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:33.242 16:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7010836 kB' 'MemAvailable: 9523388 kB' 'Buffers: 2436 kB' 'Cached: 2711116 kB' 'SwapCached: 0 kB' 'Active: 444164 kB' 'Inactive: 2393344 kB' 'Active(anon): 123764 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382680 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 840 kB' 'Writeback: 0 kB' 'AnonPages: 123776 kB' 'Mapped: 48996 kB' 'Shmem: 10468 kB' 'KReclaimable: 92528 kB' 'Slab: 170484 kB' 'SReclaimable: 92528 kB' 'SUnreclaim: 77956 kB' 'KernelStack: 4736 kB' 'PageTables: 3452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 355520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 
00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.242 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.242 16:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.243 16:01:03 -- setup/common.sh@33 -- # echo 0 00:08:33.243 16:01:03 -- setup/common.sh@33 -- # return 0 00:08:33.243 16:01:03 -- setup/hugepages.sh@99 -- # surp=0 00:08:33.243 16:01:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:33.243 16:01:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:33.243 16:01:03 -- setup/common.sh@18 -- # local node= 00:08:33.243 16:01:03 -- setup/common.sh@19 -- # local var val 00:08:33.243 16:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:33.243 16:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:33.243 16:01:03 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:08:33.243 16:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:33.243 16:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:33.243 16:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7010836 kB' 'MemAvailable: 9523388 kB' 'Buffers: 2436 kB' 'Cached: 2711116 kB' 'SwapCached: 0 kB' 'Active: 444028 kB' 'Inactive: 2393344 kB' 'Active(anon): 123628 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382680 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 840 kB' 'Writeback: 0 kB' 'AnonPages: 123644 kB' 'Mapped: 48996 kB' 'Shmem: 10468 kB' 'KReclaimable: 92528 kB' 'Slab: 170484 kB' 'SReclaimable: 92528 kB' 'SUnreclaim: 77956 kB' 'KernelStack: 4720 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 355520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.243 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.243 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 
-- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- 
setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.244 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:33.244 16:01:03 -- setup/common.sh@33 -- # echo 0 00:08:33.244 16:01:03 -- setup/common.sh@33 -- # return 0 00:08:33.244 16:01:03 -- setup/hugepages.sh@100 -- # resv=0 00:08:33.244 16:01:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:33.244 nr_hugepages=1024 00:08:33.244 resv_hugepages=0 00:08:33.244 16:01:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:33.244 surplus_hugepages=0 00:08:33.244 16:01:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:33.244 anon_hugepages=0 00:08:33.244 16:01:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:33.244 16:01:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:33.244 16:01:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:33.244 16:01:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:33.244 16:01:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:33.244 16:01:03 -- setup/common.sh@18 -- # local node= 00:08:33.244 16:01:03 -- setup/common.sh@19 -- # local var val 00:08:33.244 16:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:33.244 16:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:33.244 16:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:33.244 16:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:33.244 16:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:33.244 16:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.244 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7010836 kB' 'MemAvailable: 9523388 kB' 'Buffers: 2436 kB' 'Cached: 2711116 kB' 'SwapCached: 0 kB' 'Active: 444240 kB' 'Inactive: 2393344 kB' 'Active(anon): 123840 kB' 
'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382680 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 840 kB' 'Writeback: 0 kB' 'AnonPages: 123856 kB' 'Mapped: 48996 kB' 'Shmem: 10468 kB' 'KReclaimable: 92528 kB' 'Slab: 170484 kB' 'SReclaimable: 92528 kB' 'SUnreclaim: 77956 kB' 'KernelStack: 4704 kB' 'PageTables: 3388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 355520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- 
setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.245 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.245 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 
00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- 
setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:33.246 16:01:03 -- setup/common.sh@33 -- # echo 1024 00:08:33.246 16:01:03 -- setup/common.sh@33 -- # return 0 00:08:33.246 16:01:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:33.246 16:01:03 -- setup/hugepages.sh@112 -- # get_nodes 00:08:33.246 16:01:03 -- setup/hugepages.sh@27 -- # local node 00:08:33.246 16:01:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:33.246 16:01:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:33.246 16:01:03 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:33.246 16:01:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:33.246 16:01:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:33.246 16:01:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:33.246 16:01:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:33.246 16:01:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:33.246 16:01:03 -- setup/common.sh@18 -- # local node=0 00:08:33.246 16:01:03 -- setup/common.sh@19 -- # local var val 00:08:33.246 16:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:33.246 16:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:33.246 16:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:33.246 16:01:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:33.246 16:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:33.246 16:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7010836 kB' 'MemUsed: 5221408 kB' 'SwapCached: 0 kB' 'Active: 444224 kB' 'Inactive: 2393348 kB' 'Active(anon): 123824 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 840 kB' 'Writeback: 0 kB' 'FilePages: 2713552 kB' 'Mapped: 48996 kB' 'AnonPages: 123816 kB' 'Shmem: 10468 kB' 'KernelStack: 4720 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92528 kB' 'Slab: 170484 kB' 'SReclaimable: 92528 kB' 'SUnreclaim: 77956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 
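The long runs of escaped \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l comparisons above are just bash xtrace printing get_meminfo() from test/setup/common.sh as it scans every meminfo key and continues until it reaches the one it was asked for (HugePages_Total, answered with 1024, and then HugePages_Surp for node0 via /sys/devices/system/node/node0/meminfo). A condensed, hedged sketch of that lookup follows; the _sketch name and exact flow are illustrative, the real implementation is what the trace itself shows:

shopt -s extglob                              # the "Node +([0-9]) " strip below uses an extended glob, as in common.sh

get_meminfo_sketch() {                        # simplified stand-in for common.sh's get_meminfo()
	local get=$1 node=$2 var val _
	local mem_f=/proc/meminfo
	local -a mem
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo   # per-NUMA-node counters
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")          # per-node lines are prefixed with "Node <id> "
	while IFS=': ' read -r var val _; do      # "HugePages_Total:  1024" -> var=HugePages_Total val=1024
		if [[ $var == "$get" ]]; then
			echo "$val"                       # e.g. the "echo 1024" seen in the trace
			return 0
		fi
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

Called as get_meminfo_sketch HugePages_Surp 0, it returns the surplus-page count for node0, which is what the hugepages.sh@117 loop above feeds into its per-node accounting.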
00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.246 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.246 16:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.247 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.247 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.247 16:01:03 -- setup/common.sh@33 -- # echo 0 00:08:33.247 16:01:03 -- setup/common.sh@33 -- # return 0 00:08:33.247 16:01:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:33.247 16:01:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:33.247 16:01:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:33.247 16:01:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:33.247 16:01:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:33.247 node0=1024 expecting 1024 00:08:33.247 16:01:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:33.247 00:08:33.247 real 0m1.078s 00:08:33.247 user 0m0.466s 00:08:33.247 sys 0m0.570s 00:08:33.247 16:01:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:33.247 16:01:03 -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.247 ************************************ 00:08:33.247 END TEST default_setup 00:08:33.247 ************************************ 00:08:33.505 16:01:03 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:08:33.505 16:01:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.505 16:01:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.505 16:01:03 -- common/autotest_common.sh@10 -- # set +x 00:08:33.505 ************************************ 00:08:33.505 START TEST per_node_1G_alloc 00:08:33.505 ************************************ 00:08:33.505 16:01:03 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:08:33.505 16:01:03 -- setup/hugepages.sh@143 -- # local IFS=, 00:08:33.505 16:01:03 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:08:33.505 16:01:03 -- setup/hugepages.sh@49 -- # local size=1048576 00:08:33.505 16:01:03 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:33.505 16:01:03 -- setup/hugepages.sh@51 -- # shift 00:08:33.505 16:01:03 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:33.505 16:01:03 -- setup/hugepages.sh@52 -- # local node_ids 00:08:33.505 16:01:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:33.505 16:01:03 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:33.505 16:01:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:33.505 16:01:03 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:33.505 16:01:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:33.505 16:01:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:33.505 16:01:03 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:33.505 16:01:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:33.505 16:01:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:33.505 16:01:03 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:33.505 16:01:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:33.505 16:01:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:08:33.505 16:01:03 -- setup/hugepages.sh@73 -- # return 0 00:08:33.505 16:01:03 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:08:33.505 16:01:03 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:08:33.505 16:01:03 -- setup/hugepages.sh@146 -- # setup output 00:08:33.505 16:01:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:33.505 16:01:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:33.764 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:33.764 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:33.764 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:33.764 16:01:03 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:08:33.764 16:01:03 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:08:33.764 16:01:03 -- setup/hugepages.sh@89 -- # local node 00:08:33.764 16:01:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:33.764 16:01:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:33.764 16:01:03 -- setup/hugepages.sh@92 -- # local surp 00:08:33.764 16:01:03 -- setup/hugepages.sh@93 -- # local resv 00:08:33.764 16:01:03 -- setup/hugepages.sh@94 -- # local anon 00:08:33.764 16:01:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:33.764 16:01:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:33.764 16:01:03 -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:08:33.764 16:01:03 -- setup/common.sh@18 -- # local node= 00:08:33.764 16:01:03 -- setup/common.sh@19 -- # local var val 00:08:33.764 16:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:33.764 16:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:33.764 16:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:33.764 16:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:33.764 16:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:33.764 16:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:33.764 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.764 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8058744 kB' 'MemAvailable: 10571284 kB' 'Buffers: 2436 kB' 'Cached: 2711116 kB' 'SwapCached: 0 kB' 'Active: 444308 kB' 'Inactive: 2393348 kB' 'Active(anon): 123908 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 840 kB' 'Writeback: 0 kB' 'AnonPages: 124356 kB' 'Mapped: 49284 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170464 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 77972 kB' 'KernelStack: 4764 kB' 'PageTables: 3112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 355520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53392 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # 
continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Dirty == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.765 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.765 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 
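per_node_1G_alloc asks for a 1 GiB pool confined to one node: 1048576 kB divided by the 2048 kB default hugepage size gives the nr_hugepages=512 seen at hugepages.sh@57, and NRHUGE=512 HUGENODE=0 hands that request to scripts/setup.sh. verify_nr_hugepages then re-reads the counters: because /sys/kernel/mm/transparent_hugepage/enabled reports "always [madvise] never" (THP is not set to never), it samples AnonHugePages first, and the scans that follow pick up HugePages_Surp and HugePages_Rsvd. In outline, hedged as a paraphrase of test/setup/hugepages.sh rather than its exact code, and assuming get_meminfo is sourced from test/setup/common.sh:

nr_hugepages=512                         # get_test_nr_hugepages 1048576 0 -> 1048576 kB / 2048 kB
anon=$(get_meminfo AnonHugePages)        # THP usage, sampled because THP mode is not "never"
surp=$(get_meminfo HugePages_Surp)       # surplus pages in the pool
resv=$(get_meminfo HugePages_Rsvd)       # reserved-but-unfaulted pages
total=$(get_meminfo HugePages_Total)
(( total == nr_hugepages + surp + resv ))    # same check as hugepages.sh@110 in the previous test
for node_dir in /sys/devices/system/node/node[0-9]*; do
	node=${node_dir##*node}
	# the real script compares each node against its nodes_test bookkeeping;
	# 512 is simply this test's request for node0 (default_setup expected 1024)
	echo "node${node}=$(get_meminfo HugePages_Total "$node") expecting 512"
done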
00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:33.766 16:01:03 -- setup/common.sh@33 -- # echo 0 00:08:33.766 16:01:03 -- setup/common.sh@33 -- # return 0 00:08:33.766 16:01:03 -- setup/hugepages.sh@97 -- # anon=0 00:08:33.766 16:01:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:33.766 16:01:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:33.766 16:01:03 -- setup/common.sh@18 -- # local node= 00:08:33.766 16:01:03 -- setup/common.sh@19 -- # local var val 00:08:33.766 16:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:33.766 16:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:33.766 16:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:33.766 16:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:33.766 16:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:33.766 16:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8058240 kB' 'MemAvailable: 10570780 kB' 'Buffers: 2436 kB' 'Cached: 2711116 kB' 'SwapCached: 0 kB' 'Active: 444504 kB' 'Inactive: 2393348 kB' 'Active(anon): 124104 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 840 kB' 'Writeback: 0 kB' 'AnonPages: 124536 kB' 'Mapped: 49284 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170460 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 77968 kB' 'KernelStack: 4764 kB' 'PageTables: 3112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 355520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': 
' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:33.766 16:01:03 -- setup/common.sh@32 -- # continue 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:33.766 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.027 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.027 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.027 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.027 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 
-- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.028 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.028 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.029 16:01:03 -- setup/common.sh@33 -- # echo 0 00:08:34.029 16:01:03 -- setup/common.sh@33 -- # return 0 00:08:34.029 16:01:03 -- setup/hugepages.sh@99 -- # surp=0 00:08:34.029 16:01:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:34.029 16:01:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:34.029 16:01:03 -- setup/common.sh@18 -- # local node= 00:08:34.029 16:01:03 -- setup/common.sh@19 -- # local var val 00:08:34.029 16:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:34.029 16:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:34.029 16:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:34.029 16:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:34.029 16:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:34.029 16:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8057988 kB' 'MemAvailable: 10570528 kB' 'Buffers: 2436 kB' 'Cached: 2711116 kB' 'SwapCached: 0 kB' 'Active: 444448 kB' 'Inactive: 2393348 kB' 'Active(anon): 124048 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 840 kB' 'Writeback: 0 kB' 'AnonPages: 124484 kB' 'Mapped: 49284 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170460 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 77968 kB' 'KernelStack: 4732 kB' 'PageTables: 3044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 355520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var 
val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.029 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.029 16:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 
16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.030 16:01:03 -- setup/common.sh@33 -- # echo 0 00:08:34.030 16:01:03 -- setup/common.sh@33 -- # return 0 00:08:34.030 16:01:03 -- setup/hugepages.sh@100 -- # resv=0 00:08:34.030 nr_hugepages=512 00:08:34.030 16:01:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:08:34.030 resv_hugepages=0 00:08:34.030 16:01:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:34.030 surplus_hugepages=0 00:08:34.030 16:01:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:34.030 anon_hugepages=0 00:08:34.030 16:01:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:34.030 16:01:03 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:34.030 16:01:03 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:08:34.030 16:01:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:34.030 16:01:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:34.030 16:01:03 -- setup/common.sh@18 -- # local node= 00:08:34.030 16:01:03 -- setup/common.sh@19 -- # local var val 00:08:34.030 16:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:34.030 16:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:34.030 16:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:34.030 16:01:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:34.030 16:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:34.030 16:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8057988 kB' 'MemAvailable: 10570528 kB' 'Buffers: 2436 kB' 'Cached: 2711116 kB' 'SwapCached: 0 kB' 'Active: 444516 kB' 'Inactive: 2393348 kB' 'Active(anon): 124116 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 840 kB' 'Writeback: 0 kB' 'AnonPages: 124548 kB' 'Mapped: 49284 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170460 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 77968 kB' 'KernelStack: 4764 kB' 'PageTables: 3108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 355520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53376 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 
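The get_meminfo lookups being traced here reduce to a small field scan: read the relevant meminfo file with IFS=': ', compare each key against the requested field, and echo its value once it matches (hence the long run of "continue" entries for every non-matching key). A minimal standalone sketch of that idea follows; the function name meminfo_value and its arguments are illustrative assumptions, not the literal setup/common.sh implementation, which buffers the file with mapfile first as the trace shows.

  # Print the value of one field from a meminfo file (defaults to /proc/meminfo).
  #   meminfo_value HugePages_Rsvd                                          -> 0
  #   meminfo_value HugePages_Surp /sys/devices/system/node/node0/meminfo   -> 0
  meminfo_value() {
      local get=$1 mem_f=${2:-/proc/meminfo}
      local var val _
      # Per-node files prefix every line with "Node <N> "; strip it so keys line up.
      sed -E 's/^Node [0-9]+ +//' "$mem_f" |
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"     # size fields are in kB, HugePages_* are bare page counts
              break
          fi
      done
  }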
00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.030 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.030 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 
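The bookkeeping echoed just before this scan (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feeds the consistency test traced as (( 512 == nr_hugepages + surp + resv )): the page count the test asked for has to match what the kernel reports once surplus and reserved pages are counted in. A hedged sketch of that check, reusing the meminfo_value helper sketched above (variable names here are illustrative):

  expected=512                                  # pages the test asked for (512 here)
  nr=$(meminfo_value HugePages_Total)           # 512 in the snapshot above
  resv=$(meminfo_value HugePages_Rsvd)          # 0
  surp=$(meminfo_value HugePages_Surp)          # 0
  if (( expected == nr + surp + resv )); then
      echo "hugepage pool consistent: total=$nr surplus=$surp reserved=$resv"
  else
      echo "hugepage pool mismatch: expected $expected" >&2
  fi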
00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 
-- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.031 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.031 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.032 16:01:03 -- setup/common.sh@33 -- # echo 512 00:08:34.032 16:01:03 -- setup/common.sh@33 -- # return 0 00:08:34.032 16:01:03 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:34.032 16:01:03 -- setup/hugepages.sh@112 -- # get_nodes 00:08:34.032 16:01:03 -- setup/hugepages.sh@27 -- # local node 00:08:34.032 16:01:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:34.032 16:01:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:34.032 16:01:03 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:34.032 16:01:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:34.032 16:01:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:34.032 16:01:03 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:08:34.032 16:01:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:34.032 16:01:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:34.032 16:01:03 -- setup/common.sh@18 -- # local node=0 00:08:34.032 16:01:03 -- setup/common.sh@19 -- # local var val 00:08:34.032 16:01:03 -- setup/common.sh@20 -- # local mem_f mem 00:08:34.032 16:01:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:34.032 16:01:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:34.032 16:01:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:34.032 16:01:03 -- setup/common.sh@28 -- # mapfile -t mem 00:08:34.032 16:01:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8057736 kB' 'MemUsed: 4174508 kB' 'SwapCached: 0 kB' 'Active: 444416 kB' 'Inactive: 2393340 kB' 'Active(anon): 124016 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382684 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 840 kB' 'Writeback: 0 kB' 'FilePages: 2713552 kB' 'Mapped: 49172 kB' 'AnonPages: 124440 kB' 'Shmem: 10468 kB' 'KernelStack: 4760 kB' 'PageTables: 3240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92492 kB' 'Slab: 170456 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 77964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
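At this point the test switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, so the 512 pages can be attributed to a specific NUMA node (this VM exposes a single node). A rough per-node walk in the same spirit is sketched below; it reuses the meminfo_value helper from earlier and is an assumption-laden simplification, not the literal nodes_test/nodes_sys bookkeeping that hugepages.sh traces here:

  expected_per_node=512
  for d in /sys/devices/system/node/node[0-9]*; do
      n=${d##*node}
      got=$(meminfo_value HugePages_Total "$d/meminfo")
      surp=$(meminfo_value HugePages_Surp "$d/meminfo")
      echo "node$n: HugePages_Total=$got HugePages_Surp=$surp (expecting $expected_per_node)"
      (( got == expected_per_node )) || echo "node$n hugepage count mismatch" >&2
  done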
00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 
00:08:34.032 16:01:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.032 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.032 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 
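A couple of quick arithmetic sanity checks on the snapshots above: in the node0 dump, MemUsed is just MemTotal minus MemFree (12232244 kB - 8057736 kB = 4174508 kB), and the 512 huge pages at the 2048 kB Hugepagesize reported in the system-wide dump account for 512 * 2048 kB = 1048576 kB, matching its Hugetlb line. The same checks in shell arithmetic:

  echo $(( 12232244 - 8057736 ))   # 4174508 kB -> MemUsed in the node0 meminfo dump
  echo $(( 512 * 2048 ))           # 1048576 kB -> Hugetlb with 512 pages of 2048 kB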
00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # continue 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.033 16:01:03 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.033 16:01:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.033 16:01:03 -- setup/common.sh@33 -- # echo 0 00:08:34.033 16:01:03 -- setup/common.sh@33 -- # return 0 00:08:34.033 16:01:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:34.033 16:01:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:34.033 16:01:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:34.033 16:01:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:34.033 node0=512 expecting 512 00:08:34.033 16:01:03 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:34.033 16:01:03 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:34.033 00:08:34.033 real 0m0.552s 00:08:34.033 user 0m0.262s 00:08:34.033 sys 0m0.327s 00:08:34.033 16:01:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:34.033 16:01:03 -- common/autotest_common.sh@10 -- # set +x 00:08:34.033 ************************************ 00:08:34.033 END TEST per_node_1G_alloc 00:08:34.033 ************************************ 00:08:34.033 16:01:03 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:08:34.033 16:01:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.033 16:01:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.033 16:01:03 -- common/autotest_common.sh@10 -- # set +x 00:08:34.033 ************************************ 00:08:34.033 START TEST even_2G_alloc 00:08:34.033 ************************************ 00:08:34.033 16:01:03 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:08:34.033 16:01:03 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:08:34.033 16:01:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:34.033 16:01:03 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:34.033 16:01:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:34.033 16:01:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:34.033 16:01:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:34.033 16:01:03 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:34.033 16:01:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:34.033 16:01:03 -- setup/hugepages.sh@64 -- # local 
_nr_hugepages=1024 00:08:34.033 16:01:03 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:34.033 16:01:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:34.033 16:01:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:34.033 16:01:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:34.033 16:01:03 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:34.033 16:01:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:34.033 16:01:03 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:08:34.033 16:01:03 -- setup/hugepages.sh@83 -- # : 0 00:08:34.033 16:01:03 -- setup/hugepages.sh@84 -- # : 0 00:08:34.033 16:01:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:34.033 16:01:03 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:08:34.033 16:01:03 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:08:34.033 16:01:03 -- setup/hugepages.sh@153 -- # setup output 00:08:34.033 16:01:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:34.033 16:01:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:34.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:34.605 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:34.605 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:34.605 16:01:04 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:08:34.605 16:01:04 -- setup/hugepages.sh@89 -- # local node 00:08:34.605 16:01:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:34.605 16:01:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:34.605 16:01:04 -- setup/hugepages.sh@92 -- # local surp 00:08:34.605 16:01:04 -- setup/hugepages.sh@93 -- # local resv 00:08:34.605 16:01:04 -- setup/hugepages.sh@94 -- # local anon 00:08:34.605 16:01:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:34.605 16:01:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:34.605 16:01:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:34.605 16:01:04 -- setup/common.sh@18 -- # local node= 00:08:34.605 16:01:04 -- setup/common.sh@19 -- # local var val 00:08:34.605 16:01:04 -- setup/common.sh@20 -- # local mem_f mem 00:08:34.605 16:01:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:34.605 16:01:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:34.605 16:01:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:34.605 16:01:04 -- setup/common.sh@28 -- # mapfile -t mem 00:08:34.605 16:01:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:34.605 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7005168 kB' 'MemAvailable: 9517712 kB' 'Buffers: 2436 kB' 'Cached: 2711120 kB' 'SwapCached: 0 kB' 'Active: 444484 kB' 'Inactive: 2393360 kB' 'Active(anon): 124084 kB' 'Inactive(anon): 10672 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1168 kB' 'Writeback: 0 kB' 'AnonPages: 124304 kB' 'Mapped: 49148 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170556 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78064 kB' 'KernelStack: 4932 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 355212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53376 
kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # 
read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.606 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.606 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': 
' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.607 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.607 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.608 16:01:04 -- 
setup/common.sh@32 -- # continue 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:34.608 16:01:04 -- setup/common.sh@33 -- # echo 0 00:08:34.608 16:01:04 -- setup/common.sh@33 -- # return 0 00:08:34.608 16:01:04 -- setup/hugepages.sh@97 -- # anon=0 00:08:34.608 16:01:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:34.608 16:01:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:34.608 16:01:04 -- setup/common.sh@18 -- # local node= 00:08:34.608 16:01:04 -- setup/common.sh@19 -- # local var val 00:08:34.608 16:01:04 -- setup/common.sh@20 -- # local mem_f mem 00:08:34.608 16:01:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:34.608 16:01:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:34.608 16:01:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:34.608 16:01:04 -- setup/common.sh@28 -- # mapfile -t mem 00:08:34.608 16:01:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.608 16:01:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7004916 kB' 'MemAvailable: 9517460 kB' 'Buffers: 2436 kB' 'Cached: 2711120 kB' 'SwapCached: 0 kB' 'Active: 444348 kB' 'Inactive: 2393352 kB' 'Active(anon): 123948 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1168 kB' 'Writeback: 0 kB' 'AnonPages: 124500 kB' 'Mapped: 48956 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170548 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78056 kB' 'KernelStack: 4800 kB' 'PageTables: 3560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 362136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.608 16:01:04 -- 
setup/common.sh@32 -- # continue 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.608 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.608 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.609 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.609 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- 
setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.610 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.610 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r 
var val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.611 16:01:04 -- setup/common.sh@33 -- # echo 0 00:08:34.611 16:01:04 -- setup/common.sh@33 -- # return 0 00:08:34.611 16:01:04 -- setup/hugepages.sh@99 -- # surp=0 00:08:34.611 16:01:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:34.611 16:01:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:34.611 16:01:04 -- setup/common.sh@18 -- # local node= 00:08:34.611 16:01:04 -- setup/common.sh@19 -- # local var val 00:08:34.611 16:01:04 -- setup/common.sh@20 -- # local mem_f mem 00:08:34.611 16:01:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:34.611 16:01:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:34.611 16:01:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:34.611 16:01:04 -- setup/common.sh@28 -- # mapfile -t mem 00:08:34.611 16:01:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.611 16:01:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7004916 kB' 'MemAvailable: 9517460 kB' 'Buffers: 2436 kB' 'Cached: 2711120 kB' 'SwapCached: 0 kB' 'Active: 443940 kB' 'Inactive: 2393348 kB' 'Active(anon): 123540 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1168 kB' 'Writeback: 0 kB' 'AnonPages: 123736 kB' 'Mapped: 49076 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170540 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78048 kB' 'KernelStack: 4736 kB' 'PageTables: 3428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 355212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53344 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var 
val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.611 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.611 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
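The long run of timestamped "continue" entries above is setup/common.sh's get_meminfo helper scanning every /proc/meminfo field until it reaches the key it was asked for (HugePages_Surp, then HugePages_Rsvd). A condensed sketch of that loop follows; get_meminfo_sketch is an illustrative name and the structure is simplified (the real helper mapfiles the whole file first and strips any leading "Node N" prefix, as the mapfile -t mem lines in the trace show):

get_meminfo_sketch() {
    # $1 = field to look up (e.g. HugePages_Surp), $2 = optional NUMA node number
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    # per-node lookups read the node-specific meminfo instead of the global one
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#"Node $node "}              # per-node files prefix every line with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue        # every mismatch shows up as a "continue" in the trace
        echo "$val"                             # the matching value (the "echo 0" / "return 0" above)
        return 0
    done < "$mem_f"
}

Each "continue" in the console corresponds to one non-matching field, roughly fifty of them per lookup, which is why a single get_meminfo call occupies this much of the log.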
00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.612 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.612 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # 
read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.613 16:01:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:34.613 16:01:04 -- setup/common.sh@33 -- # echo 0 00:08:34.613 16:01:04 -- setup/common.sh@33 -- # return 0 00:08:34.613 16:01:04 -- setup/hugepages.sh@100 -- # resv=0 00:08:34.613 16:01:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:34.613 nr_hugepages=1024 00:08:34.613 resv_hugepages=0 00:08:34.613 16:01:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:34.613 surplus_hugepages=0 00:08:34.613 16:01:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:34.613 anon_hugepages=0 00:08:34.613 16:01:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:34.613 16:01:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:34.613 16:01:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:34.613 16:01:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:34.613 16:01:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:34.613 16:01:04 -- setup/common.sh@18 -- # local node= 00:08:34.613 16:01:04 -- setup/common.sh@19 -- # local var val 00:08:34.613 16:01:04 -- setup/common.sh@20 -- # local mem_f mem 00:08:34.613 16:01:04 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:08:34.613 16:01:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:34.613 16:01:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:34.613 16:01:04 -- setup/common.sh@28 -- # mapfile -t mem 00:08:34.613 16:01:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:34.613 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.614 16:01:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7004916 kB' 'MemAvailable: 9517460 kB' 'Buffers: 2436 kB' 'Cached: 2711120 kB' 'SwapCached: 0 kB' 'Active: 443712 kB' 'Inactive: 2393352 kB' 'Active(anon): 123312 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1168 kB' 'Writeback: 0 kB' 'AnonPages: 123848 kB' 'Mapped: 49076 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170536 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78044 kB' 'KernelStack: 4720 kB' 'PageTables: 3404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 355212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
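By this point the test has derived surp=0 (HugePages_Surp), resv=0 (HugePages_Rsvd) and echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0; the scan now in progress is the same field-matching loop hunting for HugePages_Total so the (( 1024 == nr_hugepages + surp + resv )) assertion can be evaluated. The bookkeeping amounts to the following sketch, using the illustrative get_meminfo_sketch helper above rather than hugepages.sh's literal code:

nr_hugepages=1024                                  # pages requested for the even_2G_alloc test
surp=$(get_meminfo_sketch HugePages_Surp)          # surplus pages handed out beyond the pool (0 here)
resv=$(get_meminfo_sketch HugePages_Rsvd)          # pages reserved but not yet faulted in (0 here)
total=$(get_meminfo_sketch HugePages_Total)        # what the kernel reports system-wide (1024 here)
# the pool is consistent only if the reported total equals request + surplus + reserved
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2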
00:08:34.614 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.614 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.614 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # 
read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.615 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.615 16:01:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # 
continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val 
_ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.616 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.616 16:01:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:34.616 16:01:04 -- setup/common.sh@33 -- # echo 1024 00:08:34.616 16:01:04 -- setup/common.sh@33 -- # return 0 00:08:34.616 16:01:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:34.616 16:01:04 -- setup/hugepages.sh@112 -- # get_nodes 00:08:34.616 16:01:04 -- setup/hugepages.sh@27 -- # local node 00:08:34.616 16:01:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:34.616 16:01:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:34.616 16:01:04 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:34.616 16:01:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:34.616 16:01:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:34.616 16:01:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:34.616 16:01:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:34.616 16:01:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:34.616 16:01:04 -- setup/common.sh@18 -- # local node=0 00:08:34.616 16:01:04 -- setup/common.sh@19 -- # local var val 00:08:34.616 16:01:04 -- setup/common.sh@20 -- # local mem_f mem 00:08:34.616 16:01:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:34.616 16:01:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:34.616 16:01:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:34.616 16:01:04 -- setup/common.sh@28 -- # mapfile -t mem 00:08:34.617 16:01:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.617 16:01:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7004916 kB' 'MemUsed: 5227328 kB' 'SwapCached: 0 kB' 'Active: 443788 kB' 'Inactive: 2393352 kB' 'Active(anon): 123388 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382688 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 1168 kB' 'Writeback: 0 kB' 'FilePages: 2713556 kB' 'Mapped: 49076 kB' 'AnonPages: 123928 kB' 'Shmem: 10468 kB' 'KernelStack: 4704 kB' 'PageTables: 3372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92492 kB' 'Slab: 
170536 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.617 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.617 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 
16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.878 16:01:04 -- setup/common.sh@32 -- # continue 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # IFS=': ' 00:08:34.878 16:01:04 -- setup/common.sh@31 -- # read -r var val _ 00:08:34.879 16:01:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:34.879 16:01:04 -- setup/common.sh@33 -- # echo 0 00:08:34.879 16:01:04 -- setup/common.sh@33 -- # return 0 00:08:34.879 16:01:04 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:08:34.879 16:01:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:34.879 16:01:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:34.879 16:01:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:34.879 node0=1024 expecting 1024 00:08:34.879 16:01:04 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:34.879 16:01:04 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:34.879 00:08:34.879 real 0m0.604s 00:08:34.879 user 0m0.265s 00:08:34.879 sys 0m0.336s 00:08:34.879 16:01:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:34.879 16:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:34.879 ************************************ 00:08:34.879 END TEST even_2G_alloc 00:08:34.879 ************************************ 00:08:34.879 16:01:04 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:08:34.879 16:01:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.879 16:01:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.879 16:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:34.879 ************************************ 00:08:34.879 START TEST odd_alloc 00:08:34.879 ************************************ 00:08:34.879 16:01:04 -- common/autotest_common.sh@1111 -- # odd_alloc 00:08:34.879 16:01:04 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:08:34.879 16:01:04 -- setup/hugepages.sh@49 -- # local size=2098176 00:08:34.879 16:01:04 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:34.879 16:01:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:34.879 16:01:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:08:34.879 16:01:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:34.879 16:01:04 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:34.879 16:01:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:34.879 16:01:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:08:34.879 16:01:04 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:34.879 16:01:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:34.879 16:01:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:34.879 16:01:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:34.879 16:01:04 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:34.879 16:01:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:34.879 16:01:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:08:34.879 16:01:04 -- setup/hugepages.sh@83 -- # : 0 00:08:34.879 16:01:04 -- setup/hugepages.sh@84 -- # : 0 00:08:34.879 16:01:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:34.879 16:01:04 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:08:34.879 16:01:04 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:08:34.879 16:01:04 -- setup/hugepages.sh@160 -- # setup output 00:08:34.879 16:01:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:34.879 16:01:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:35.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:35.137 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:35.137 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:35.396 16:01:05 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:08:35.396 16:01:05 -- setup/hugepages.sh@89 -- # local node 00:08:35.396 16:01:05 -- setup/hugepages.sh@90 -- # local sorted_t 
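The odd_alloc test that begins above sizes its pool from HUGEMEM=2049 (MiB): get_test_nr_hugepages receives 2098176 kB, which against the 2048 kB Hugepagesize reported in the meminfo dumps works out to 1024.5 pages and is settled as an odd 1025 (the dump that follows shows HugePages_Total: 1025 and Hugetlb: 2099200 kB). The rounding direction below is inferred only from those two numbers; treat it as a sketch, not hugepages.sh's exact formula:

HUGEMEM=2049                                            # MiB, as exported by the odd_alloc test above
size_kb=$(( HUGEMEM * 1024 ))                           # 2098176 kB, the argument to get_test_nr_hugepages
page_kb=2048                                            # Hugepagesize from the meminfo dumps
nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))   # rounds up: 1024.5 -> 1025 pages
echo "nr_hugepages=$nr_hugepages"                       # matches the nr_hugepages=1025 in the trace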
00:08:35.396 16:01:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:35.396 16:01:05 -- setup/hugepages.sh@92 -- # local surp 00:08:35.396 16:01:05 -- setup/hugepages.sh@93 -- # local resv 00:08:35.396 16:01:05 -- setup/hugepages.sh@94 -- # local anon 00:08:35.396 16:01:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:35.396 16:01:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:35.396 16:01:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:35.396 16:01:05 -- setup/common.sh@18 -- # local node= 00:08:35.396 16:01:05 -- setup/common.sh@19 -- # local var val 00:08:35.396 16:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:08:35.396 16:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:35.396 16:01:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:35.396 16:01:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:35.396 16:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:08:35.396 16:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.396 16:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7014092 kB' 'MemAvailable: 9526640 kB' 'Buffers: 2436 kB' 'Cached: 2711124 kB' 'SwapCached: 0 kB' 'Active: 444344 kB' 'Inactive: 2393368 kB' 'Active(anon): 123944 kB' 'Inactive(anon): 10676 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382692 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1324 kB' 'Writeback: 0 kB' 'AnonPages: 124000 kB' 'Mapped: 49264 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170620 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78128 kB' 'KernelStack: 4708 kB' 'PageTables: 3488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 355044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53376 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 
00:08:35.396 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.396 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.396 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # 
continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.397 16:01:05 -- setup/common.sh@33 -- # echo 0 00:08:35.397 16:01:05 -- setup/common.sh@33 -- # return 0 00:08:35.397 16:01:05 -- setup/hugepages.sh@97 -- # anon=0 00:08:35.397 16:01:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:35.397 16:01:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:35.397 16:01:05 -- setup/common.sh@18 -- # local node= 00:08:35.397 16:01:05 -- setup/common.sh@19 -- # local 
var val 00:08:35.397 16:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:08:35.397 16:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:35.397 16:01:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:35.397 16:01:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:35.397 16:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:08:35.397 16:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7014092 kB' 'MemAvailable: 9526640 kB' 'Buffers: 2436 kB' 'Cached: 2711124 kB' 'SwapCached: 0 kB' 'Active: 444012 kB' 'Inactive: 2393368 kB' 'Active(anon): 123612 kB' 'Inactive(anon): 10676 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382692 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1324 kB' 'Writeback: 0 kB' 'AnonPages: 123668 kB' 'Mapped: 49264 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170612 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78120 kB' 'KernelStack: 4676 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 355044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r 
var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.397 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.397 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 
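The rest of this pass reads HugePages_Surp, HugePages_Rsvd and HugePages_Total the same way and then checks the totals. A condensed sketch of that accounting (the awk helper is illustrative; the values in the comments are the ones this run reports further down):

meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

surp=$(meminfo_val HugePages_Surp)        # 0 in this run
resv=$(meminfo_val HugePages_Rsvd)        # 0
total=$(meminfo_val HugePages_Total)      # 1025 once odd_alloc's pool is in place

# Mirrors the "(( 1025 == nr_hugepages + surp + resv ))" check that appears below.
if ((total + surp + resv == 1025)); then
    echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
fi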
00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.398 16:01:05 -- setup/common.sh@33 -- # echo 0 00:08:35.398 16:01:05 -- setup/common.sh@33 -- # return 0 00:08:35.398 16:01:05 -- setup/hugepages.sh@99 -- # surp=0 00:08:35.398 16:01:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:35.398 16:01:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:35.398 16:01:05 -- setup/common.sh@18 -- # local node= 00:08:35.398 16:01:05 -- setup/common.sh@19 -- # local var val 00:08:35.398 16:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:08:35.398 16:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:35.398 16:01:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:35.398 16:01:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:35.398 16:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:08:35.398 16:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7014092 kB' 'MemAvailable: 9526640 kB' 'Buffers: 2436 kB' 'Cached: 2711124 kB' 'SwapCached: 0 kB' 'Active: 443700 kB' 'Inactive: 2393356 kB' 'Active(anon): 123300 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382692 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 
8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1324 kB' 'Writeback: 0 kB' 'AnonPages: 123572 kB' 'Mapped: 49084 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170612 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78120 kB' 'KernelStack: 4720 kB' 'PageTables: 3404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 355044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 
16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.398 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.398 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:35.399 16:01:05 -- setup/common.sh@33 -- # echo 0 00:08:35.399 16:01:05 -- setup/common.sh@33 -- # return 0 00:08:35.399 16:01:05 -- setup/hugepages.sh@100 -- # resv=0 00:08:35.399 nr_hugepages=1025 00:08:35.399 16:01:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:08:35.399 resv_hugepages=0 00:08:35.399 16:01:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:35.399 surplus_hugepages=0 00:08:35.399 16:01:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:35.399 anon_hugepages=0 00:08:35.399 16:01:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:35.399 16:01:05 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:35.399 16:01:05 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:08:35.399 16:01:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:35.399 16:01:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:35.399 16:01:05 -- setup/common.sh@18 -- # local node= 00:08:35.399 16:01:05 -- setup/common.sh@19 -- # local var val 00:08:35.399 16:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:08:35.399 16:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:35.399 16:01:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:35.399 16:01:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:35.399 16:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:08:35.399 16:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7014092 kB' 'MemAvailable: 9526640 kB' 'Buffers: 2436 kB' 'Cached: 2711124 kB' 'SwapCached: 0 kB' 'Active: 443972 kB' 'Inactive: 2393356 kB' 'Active(anon): 123572 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382692 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1324 kB' 'Writeback: 0 kB' 'AnonPages: 123900 kB' 'Mapped: 49084 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170612 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78120 kB' 'KernelStack: 4736 kB' 'PageTables: 3436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 355044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.399 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.399 16:01:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 
16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.400 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:35.400 16:01:05 -- setup/common.sh@33 -- # echo 1025 00:08:35.400 16:01:05 -- setup/common.sh@33 -- # return 0 00:08:35.400 16:01:05 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:35.400 16:01:05 -- 
setup/hugepages.sh@112 -- # get_nodes 00:08:35.400 16:01:05 -- setup/hugepages.sh@27 -- # local node 00:08:35.400 16:01:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:35.400 16:01:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:08:35.400 16:01:05 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:35.400 16:01:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:35.400 16:01:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:35.400 16:01:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:35.400 16:01:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:35.400 16:01:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:35.400 16:01:05 -- setup/common.sh@18 -- # local node=0 00:08:35.400 16:01:05 -- setup/common.sh@19 -- # local var val 00:08:35.400 16:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:08:35.400 16:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:35.400 16:01:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:35.400 16:01:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:35.400 16:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:08:35.400 16:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.400 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7014092 kB' 'MemUsed: 5218152 kB' 'SwapCached: 0 kB' 'Active: 443900 kB' 'Inactive: 2393356 kB' 'Active(anon): 123500 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382692 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 1324 kB' 'Writeback: 0 kB' 'FilePages: 2713560 kB' 'Mapped: 49084 kB' 'AnonPages: 123840 kB' 'Shmem: 10468 kB' 'KernelStack: 4720 kB' 'PageTables: 3404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92492 kB' 'Slab: 170612 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 
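For readability, here is a condensed reconstruction of the get_meminfo helper whose xtrace fills the surrounding lines (setup/common.sh@17-33). It is a sketch of the observable logic only; the file reads and the exact loop wiring are not visible in the trace, so treat it as an approximation rather than the SPDK script itself.

#!/usr/bin/env bash
# Sketch reconstructed from the xtrace above (setup/common.sh@17-33); not a
# verbatim copy of the SPDK script -- the redirections are inferred, not traced.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2              # e.g. "HugePages_Surp" "0"
    local var val _ mem_f mem

    mem_f=/proc/meminfo
    # Use the per-node meminfo when a node was requested and sysfs exposes it.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # The long [[ <field> == \H\u\g\e... ]] / continue run in the trace is this
    # scan: split "Key: value [kB]" and stop at the requested key.
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp 0          # prints 0 in the run traced here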
00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 
-- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.401 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.401 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:35.401 16:01:05 -- setup/common.sh@33 -- # echo 0 00:08:35.401 16:01:05 -- setup/common.sh@33 -- # return 0 00:08:35.401 16:01:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:35.401 16:01:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:35.401 16:01:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:35.401 16:01:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:35.401 node0=1025 expecting 1025 00:08:35.401 16:01:05 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:08:35.401 16:01:05 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:08:35.401 00:08:35.401 real 0m0.594s 00:08:35.401 user 0m0.311s 00:08:35.401 sys 0m0.320s 00:08:35.401 16:01:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:35.401 16:01:05 -- common/autotest_common.sh@10 -- # set +x 00:08:35.401 ************************************ 00:08:35.401 END TEST odd_alloc 00:08:35.401 ************************************ 00:08:35.401 16:01:05 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:08:35.401 16:01:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.401 16:01:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.401 16:01:05 -- common/autotest_common.sh@10 -- # set +x 00:08:35.659 ************************************ 00:08:35.659 START TEST custom_alloc 00:08:35.659 ************************************ 00:08:35.659 16:01:05 -- common/autotest_common.sh@1111 -- # custom_alloc 00:08:35.659 16:01:05 -- setup/hugepages.sh@167 -- # local IFS=, 00:08:35.659 16:01:05 -- setup/hugepages.sh@169 -- 
# local node 00:08:35.659 16:01:05 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:08:35.659 16:01:05 -- setup/hugepages.sh@170 -- # local nodes_hp 00:08:35.659 16:01:05 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:08:35.659 16:01:05 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:08:35.659 16:01:05 -- setup/hugepages.sh@49 -- # local size=1048576 00:08:35.659 16:01:05 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:35.659 16:01:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:35.659 16:01:05 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:35.659 16:01:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:35.659 16:01:05 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:35.659 16:01:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:35.659 16:01:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:35.659 16:01:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:35.659 16:01:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:35.659 16:01:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:35.659 16:01:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:35.659 16:01:05 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:35.659 16:01:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:35.659 16:01:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:08:35.659 16:01:05 -- setup/hugepages.sh@83 -- # : 0 00:08:35.659 16:01:05 -- setup/hugepages.sh@84 -- # : 0 00:08:35.659 16:01:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:35.659 16:01:05 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:08:35.659 16:01:05 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:08:35.659 16:01:05 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:08:35.659 16:01:05 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:08:35.659 16:01:05 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:08:35.659 16:01:05 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:08:35.659 16:01:05 -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:35.659 16:01:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:35.659 16:01:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:35.659 16:01:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:35.659 16:01:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:35.659 16:01:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:35.659 16:01:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:35.659 16:01:05 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:08:35.659 16:01:05 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:08:35.659 16:01:05 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:08:35.659 16:01:05 -- setup/hugepages.sh@78 -- # return 0 00:08:35.659 16:01:05 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:08:35.659 16:01:05 -- setup/hugepages.sh@187 -- # setup output 00:08:35.659 16:01:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:35.659 16:01:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:35.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:35.917 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:35.917 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:35.917 16:01:05 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:08:35.917 16:01:05 -- setup/hugepages.sh@188 
-- # verify_nr_hugepages 00:08:35.917 16:01:05 -- setup/hugepages.sh@89 -- # local node 00:08:35.917 16:01:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:35.917 16:01:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:35.917 16:01:05 -- setup/hugepages.sh@92 -- # local surp 00:08:35.917 16:01:05 -- setup/hugepages.sh@93 -- # local resv 00:08:35.917 16:01:05 -- setup/hugepages.sh@94 -- # local anon 00:08:35.917 16:01:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:35.917 16:01:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:35.917 16:01:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:35.917 16:01:05 -- setup/common.sh@18 -- # local node= 00:08:35.917 16:01:05 -- setup/common.sh@19 -- # local var val 00:08:35.917 16:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:08:35.917 16:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:35.917 16:01:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:35.917 16:01:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:35.917 16:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:08:35.917 16:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8059416 kB' 'MemAvailable: 10571972 kB' 'Buffers: 2436 kB' 'Cached: 2711132 kB' 'SwapCached: 0 kB' 'Active: 444800 kB' 'Inactive: 2393368 kB' 'Active(anon): 124400 kB' 'Inactive(anon): 10668 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1448 kB' 'Writeback: 0 kB' 'AnonPages: 124180 kB' 'Mapped: 49300 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170612 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78120 kB' 'KernelStack: 4964 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 355172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53376 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ 
Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # 
read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.918 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.918 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 
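The custom_alloc prologue traced a few lines earlier (setup/hugepages.sh@167-187) boils down to: convert the requested 1048576 kB into 2048 kB hugepages and emit a HUGENODE spec for setup.sh. A minimal sketch follows, assuming the kB units implied by the Hugepagesize and Hugetlb values printed above; the division itself is not shown in the trace, but 1048576 / 2048 matches the 512 pages it assigns to node 0.

#!/usr/bin/env bash
# Minimal sketch of the HUGENODE construction traced above
# (setup/hugepages.sh@167-187). Units are kB, matching "Hugepagesize: 2048 kB"
# and "Hugetlb: 1048576 kB" in the meminfo dumps; single-node case only.

default_hugepages=2048                 # kB per hugepage

build_hugenode() {
    local size=$1                      # requested size in kB, e.g. 1048576
    local -a nodes_hp HUGENODE
    local node _nr_hugepages=0

    (( size >= default_hugepages )) || return 1
    nodes_hp[0]=$(( size / default_hugepages ))   # 1048576 / 2048 = 512 on node 0

    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done

    local IFS=,
    echo "HUGENODE='${HUGENODE[*]}'   # $_nr_hugepages pages total"
}

build_hugenode 1048576                 # -> HUGENODE='nodes_hp[0]=512'   # 512 pages total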
00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # continue 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:35.919 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:35.919 16:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:35.919 16:01:05 -- setup/common.sh@33 -- # echo 0 00:08:35.919 16:01:05 -- setup/common.sh@33 -- # return 0 00:08:36.181 16:01:05 -- setup/hugepages.sh@97 -- # anon=0 00:08:36.181 16:01:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:36.181 16:01:05 -- setup/common.sh@17 -- # 
local get=HugePages_Surp 00:08:36.181 16:01:05 -- setup/common.sh@18 -- # local node= 00:08:36.181 16:01:05 -- setup/common.sh@19 -- # local var val 00:08:36.181 16:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.181 16:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.181 16:01:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.181 16:01:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.181 16:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.181 16:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.181 16:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8059416 kB' 'MemAvailable: 10571972 kB' 'Buffers: 2436 kB' 'Cached: 2711132 kB' 'SwapCached: 0 kB' 'Active: 444536 kB' 'Inactive: 2393368 kB' 'Active(anon): 124136 kB' 'Inactive(anon): 10668 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1448 kB' 'Writeback: 0 kB' 'AnonPages: 124184 kB' 'Mapped: 49240 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170612 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78120 kB' 'KernelStack: 4900 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 355172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.181 16:01:05 -- 
setup/common.sh@32 -- # continue 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.181 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.181 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 
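The echo 'node0=1025 expecting 1025' at the end of odd_alloc, and the HugePages_Surp and HugePages_Rsvd scans running through these lines, are the per-node verification step (setup/hugepages.sh@110-130): the pages the test configured, plus any reserved or surplus pages the kernel reports (all 0 in this run), must equal what each node's sysfs meminfo shows. A simplified sketch is below; the awk helper is a stand-in for the traced get_meminfo loop, and the 512-page case is what the custom_alloc run here re-checks.

#!/usr/bin/env bash
# Simplified sketch of the per-node hugepage check traced around here
# (setup/hugepages.sh@110-130). node_meminfo is a stand-in for get_meminfo.
shopt -s extglob

node_meminfo() {                       # node_meminfo <field> <node>
    awk -v f="$1:" '$0 ~ f {print $NF; exit}' "/sys/devices/system/node/node$2/meminfo"
}

verify_per_node() {
    local expected=$1 node got surp
    for node in /sys/devices/system/node/node+([0-9]); do
        node=${node##*node}
        got=$(node_meminfo HugePages_Total "$node")
        surp=$(node_meminfo HugePages_Surp "$node")     # 0 in the run above
        echo "node$node=$(( expected + surp )) expecting $got"
        (( expected + surp == got )) || return 1
    done
}

verify_per_node 1025                   # odd_alloc case; the custom_alloc run repeats this with 512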
00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.182 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.182 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.183 16:01:05 -- setup/common.sh@33 -- # echo 0 00:08:36.183 16:01:05 -- setup/common.sh@33 -- # return 0 00:08:36.183 16:01:05 -- setup/hugepages.sh@99 -- # surp=0 00:08:36.183 16:01:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:36.183 16:01:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:36.183 16:01:05 -- setup/common.sh@18 -- # local node= 00:08:36.183 16:01:05 -- setup/common.sh@19 -- # local var val 00:08:36.183 16:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.183 16:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.183 16:01:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.183 16:01:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.183 16:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.183 16:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8059416 kB' 'MemAvailable: 10571972 kB' 'Buffers: 2436 kB' 'Cached: 2711132 kB' 'SwapCached: 0 kB' 'Active: 444648 kB' 'Inactive: 2393368 kB' 
'Active(anon): 124248 kB' 'Inactive(anon): 10668 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1448 kB' 'Writeback: 0 kB' 'AnonPages: 124304 kB' 'Mapped: 49296 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170612 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78120 kB' 'KernelStack: 4900 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 357048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # 
continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.183 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.183 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.184 16:01:05 -- setup/common.sh@33 -- # echo 0 00:08:36.184 16:01:05 -- setup/common.sh@33 -- # return 0 00:08:36.184 16:01:05 -- setup/hugepages.sh@100 -- # resv=0 00:08:36.184 nr_hugepages=512 00:08:36.184 16:01:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:08:36.184 resv_hugepages=0 00:08:36.184 16:01:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:36.184 surplus_hugepages=0 00:08:36.184 16:01:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:36.184 anon_hugepages=0 00:08:36.184 16:01:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:36.184 16:01:05 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:36.184 16:01:05 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:08:36.184 16:01:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:36.184 16:01:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:36.184 16:01:05 -- setup/common.sh@18 -- # local node= 00:08:36.184 16:01:05 -- setup/common.sh@19 -- # local var val 00:08:36.184 16:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.184 16:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.184 16:01:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.184 16:01:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.184 16:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.184 16:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8059416 kB' 'MemAvailable: 10571968 kB' 'Buffers: 2436 kB' 'Cached: 2711128 kB' 'SwapCached: 0 kB' 'Active: 444140 kB' 'Inactive: 2393364 kB' 'Active(anon): 123740 kB' 'Inactive(anon): 10668 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1448 kB' 'Writeback: 0 kB' 'AnonPages: 124064 kB' 'Mapped: 49296 kB' 'Shmem: 10468 kB' 'KReclaimable: 92492 kB' 'Slab: 170612 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78120 kB' 'KernelStack: 4804 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 355172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.184 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.184 16:01:05 -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.184 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- 
# IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.185 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.185 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.185 16:01:05 -- setup/common.sh@33 -- # echo 512 00:08:36.185 16:01:05 -- setup/common.sh@33 
-- # return 0 00:08:36.185 16:01:05 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:36.185 16:01:05 -- setup/hugepages.sh@112 -- # get_nodes 00:08:36.185 16:01:05 -- setup/hugepages.sh@27 -- # local node 00:08:36.185 16:01:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:36.185 16:01:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:36.185 16:01:05 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:36.185 16:01:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:36.185 16:01:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:36.185 16:01:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:36.185 16:01:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:36.185 16:01:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:36.186 16:01:05 -- setup/common.sh@18 -- # local node=0 00:08:36.186 16:01:05 -- setup/common.sh@19 -- # local var val 00:08:36.186 16:01:05 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.186 16:01:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.186 16:01:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:36.186 16:01:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:36.186 16:01:05 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.186 16:01:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8059164 kB' 'MemUsed: 4173080 kB' 'SwapCached: 0 kB' 'Active: 444232 kB' 'Inactive: 2393356 kB' 'Active(anon): 123832 kB' 'Inactive(anon): 10660 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382696 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 1448 kB' 'Writeback: 0 kB' 'FilePages: 2713564 kB' 'Mapped: 49196 kB' 'AnonPages: 123836 kB' 'Shmem: 10468 kB' 'KernelStack: 4800 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92492 kB' 'Slab: 170612 kB' 'SReclaimable: 92492 kB' 'SUnreclaim: 78120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # 
continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.186 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.186 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.187 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.187 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.187 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.187 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.187 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.187 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.187 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.187 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.187 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.187 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.187 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.187 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # continue 00:08:36.187 16:01:05 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.187 16:01:05 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.187 16:01:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.187 16:01:05 -- setup/common.sh@33 -- # echo 0 00:08:36.187 16:01:05 -- setup/common.sh@33 -- # return 0 00:08:36.187 16:01:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:36.187 16:01:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:36.187 16:01:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:36.187 16:01:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:36.187 node0=512 expecting 512 00:08:36.187 16:01:06 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:36.187 16:01:06 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:36.187 00:08:36.187 real 0m0.588s 00:08:36.187 user 0m0.280s 00:08:36.187 sys 0m0.348s 00:08:36.187 16:01:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:36.187 16:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:36.187 ************************************ 00:08:36.187 END TEST custom_alloc 00:08:36.187 ************************************ 00:08:36.187 16:01:06 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:08:36.187 16:01:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:36.187 16:01:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.187 16:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:36.187 ************************************ 00:08:36.187 START TEST no_shrink_alloc 00:08:36.187 ************************************ 00:08:36.187 16:01:06 -- 
common/autotest_common.sh@1111 -- # no_shrink_alloc 00:08:36.187 16:01:06 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:08:36.187 16:01:06 -- setup/hugepages.sh@49 -- # local size=2097152 00:08:36.187 16:01:06 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:36.187 16:01:06 -- setup/hugepages.sh@51 -- # shift 00:08:36.187 16:01:06 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:36.187 16:01:06 -- setup/hugepages.sh@52 -- # local node_ids 00:08:36.187 16:01:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:36.187 16:01:06 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:36.187 16:01:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:36.187 16:01:06 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:36.187 16:01:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:08:36.187 16:01:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:36.187 16:01:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:36.187 16:01:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:36.187 16:01:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:36.187 16:01:06 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:36.187 16:01:06 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:36.187 16:01:06 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:36.187 16:01:06 -- setup/hugepages.sh@73 -- # return 0 00:08:36.187 16:01:06 -- setup/hugepages.sh@198 -- # setup output 00:08:36.187 16:01:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:36.187 16:01:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:36.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:36.761 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:36.761 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:36.761 16:01:06 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:08:36.761 16:01:06 -- setup/hugepages.sh@89 -- # local node 00:08:36.761 16:01:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:36.761 16:01:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:36.761 16:01:06 -- setup/hugepages.sh@92 -- # local surp 00:08:36.761 16:01:06 -- setup/hugepages.sh@93 -- # local resv 00:08:36.761 16:01:06 -- setup/hugepages.sh@94 -- # local anon 00:08:36.761 16:01:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:36.761 16:01:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:36.761 16:01:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:36.761 16:01:06 -- setup/common.sh@18 -- # local node= 00:08:36.761 16:01:06 -- setup/common.sh@19 -- # local var val 00:08:36.761 16:01:06 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.761 16:01:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.761 16:01:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.761 16:01:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.761 16:01:06 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.761 16:01:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7005808 kB' 'MemAvailable: 9518360 kB' 'Buffers: 2436 kB' 'Cached: 2711132 kB' 'SwapCached: 0 kB' 'Active: 439952 kB' 
'Inactive: 2393388 kB' 'Active(anon): 119552 kB' 'Inactive(anon): 10688 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 120172 kB' 'Mapped: 48400 kB' 'Shmem: 10468 kB' 'KReclaimable: 92488 kB' 'Slab: 170484 kB' 'SReclaimable: 92488 kB' 'SUnreclaim: 77996 kB' 'KernelStack: 4720 kB' 'PageTables: 3112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 342796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 
-- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 
16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.761 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.761 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:36.762 16:01:06 -- setup/common.sh@33 -- # echo 0 00:08:36.762 16:01:06 -- setup/common.sh@33 -- # return 0 00:08:36.762 16:01:06 -- setup/hugepages.sh@97 -- # anon=0 00:08:36.762 16:01:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:36.762 16:01:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:36.762 16:01:06 -- setup/common.sh@18 -- # local node= 00:08:36.762 16:01:06 -- setup/common.sh@19 -- # local var val 00:08:36.762 16:01:06 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.762 16:01:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.762 16:01:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.762 16:01:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.762 16:01:06 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.762 16:01:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7005808 kB' 'MemAvailable: 9518360 kB' 'Buffers: 2436 kB' 'Cached: 2711132 kB' 'SwapCached: 0 kB' 'Active: 439728 kB' 'Inactive: 2393380 kB' 'Active(anon): 119328 kB' 'Inactive(anon): 10680 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 119372 kB' 'Mapped: 48568 kB' 'Shmem: 10468 kB' 'KReclaimable: 92488 kB' 'Slab: 170480 kB' 'SReclaimable: 92488 kB' 'SUnreclaim: 77992 kB' 'KernelStack: 4652 kB' 'PageTables: 3116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 339760 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 53248 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 
00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.762 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.762 16:01:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 
-- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.763 16:01:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.763 16:01:06 -- setup/common.sh@33 -- # echo 0 00:08:36.763 16:01:06 -- setup/common.sh@33 -- # return 0 00:08:36.763 16:01:06 -- setup/hugepages.sh@99 -- # surp=0 00:08:36.763 16:01:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:36.763 16:01:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:36.763 16:01:06 -- setup/common.sh@18 -- # local node= 00:08:36.763 16:01:06 -- setup/common.sh@19 -- # local var val 00:08:36.763 16:01:06 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.763 16:01:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.763 16:01:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.763 16:01:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.763 16:01:06 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.763 16:01:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.763 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7005808 kB' 'MemAvailable: 9518360 kB' 'Buffers: 2436 kB' 'Cached: 2711132 kB' 'SwapCached: 0 kB' 'Active: 439528 kB' 'Inactive: 2393380 kB' 'Active(anon): 119128 kB' 'Inactive(anon): 10680 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 119732 kB' 'Mapped: 48380 kB' 'Shmem: 10468 kB' 'KReclaimable: 92488 kB' 'Slab: 170464 kB' 'SReclaimable: 92488 kB' 'SUnreclaim: 77976 kB' 'KernelStack: 4636 kB' 'PageTables: 3028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 339760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53248 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 
-- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.764 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.764 
16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.764 16:01:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:36.765 16:01:06 -- setup/common.sh@33 -- # echo 0 00:08:36.765 16:01:06 -- setup/common.sh@33 -- # return 0 00:08:36.765 16:01:06 -- setup/hugepages.sh@100 -- # resv=0 00:08:36.765 16:01:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:36.765 nr_hugepages=1024 00:08:36.765 resv_hugepages=0 00:08:36.765 16:01:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:36.765 surplus_hugepages=0 00:08:36.765 16:01:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:36.765 anon_hugepages=0 00:08:36.765 16:01:06 -- 
setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:36.765 16:01:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:36.765 16:01:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:36.765 16:01:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:36.765 16:01:06 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:36.765 16:01:06 -- setup/common.sh@18 -- # local node= 00:08:36.765 16:01:06 -- setup/common.sh@19 -- # local var val 00:08:36.765 16:01:06 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.765 16:01:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.765 16:01:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:36.765 16:01:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:36.765 16:01:06 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.765 16:01:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7005808 kB' 'MemAvailable: 9518360 kB' 'Buffers: 2436 kB' 'Cached: 2711132 kB' 'SwapCached: 0 kB' 'Active: 439700 kB' 'Inactive: 2393380 kB' 'Active(anon): 119300 kB' 'Inactive(anon): 10680 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 119600 kB' 'Mapped: 48380 kB' 'Shmem: 10468 kB' 'KReclaimable: 92488 kB' 'Slab: 170464 kB' 'SReclaimable: 92488 kB' 'SUnreclaim: 77976 kB' 'KernelStack: 4620 kB' 'PageTables: 2996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 339760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53248 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.765 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.765 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 
-- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.766 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.766 16:01:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:36.766 16:01:06 -- setup/common.sh@33 -- # echo 1024 00:08:36.766 16:01:06 -- setup/common.sh@33 -- # return 0 00:08:36.766 16:01:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:36.766 16:01:06 -- setup/hugepages.sh@112 -- # get_nodes 00:08:36.766 16:01:06 -- setup/hugepages.sh@27 -- # local node 00:08:36.766 16:01:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:36.766 16:01:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:36.766 16:01:06 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:36.766 16:01:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:36.766 16:01:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:36.766 16:01:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:36.766 16:01:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:36.766 16:01:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:36.766 16:01:06 -- setup/common.sh@18 -- # local node=0 00:08:36.766 16:01:06 -- setup/common.sh@19 -- # local var val 00:08:36.766 16:01:06 -- setup/common.sh@20 -- # local mem_f mem 00:08:36.766 16:01:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:36.767 16:01:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:36.767 16:01:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:36.767 16:01:06 -- setup/common.sh@28 -- # mapfile -t mem 00:08:36.767 16:01:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:36.767 16:01:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7005808 kB' 'MemUsed: 
5226436 kB' 'SwapCached: 0 kB' 'Active: 439700 kB' 'Inactive: 2393380 kB' 'Active(anon): 119300 kB' 'Inactive(anon): 10680 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'FilePages: 2713568 kB' 'Mapped: 48380 kB' 'AnonPages: 119600 kB' 'Shmem: 10468 kB' 'KernelStack: 4620 kB' 'PageTables: 2996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92488 kB' 'Slab: 170464 kB' 'SReclaimable: 92488 kB' 'SUnreclaim: 77976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # 
continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.767 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.767 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.768 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.768 
16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.768 16:01:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.768 16:01:06 -- setup/common.sh@32 -- # continue 00:08:36.768 16:01:06 -- setup/common.sh@31 -- # IFS=': ' 00:08:36.768 16:01:06 -- setup/common.sh@31 -- # read -r var val _ 00:08:36.768 16:01:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:36.768 16:01:06 -- setup/common.sh@33 -- # echo 0 00:08:36.768 16:01:06 -- setup/common.sh@33 -- # return 0 00:08:36.768 16:01:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:36.768 16:01:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:36.768 16:01:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:36.768 node0=1024 expecting 1024 00:08:36.768 16:01:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:36.768 16:01:06 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:36.768 16:01:06 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:36.768 16:01:06 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:08:36.768 16:01:06 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:08:36.768 16:01:06 -- setup/hugepages.sh@202 -- # setup output 00:08:36.768 16:01:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:36.768 16:01:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:37.338 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:37.338 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:37.338 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:37.338 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:08:37.339 16:01:07 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:08:37.339 16:01:07 -- setup/hugepages.sh@89 -- # local node 00:08:37.339 16:01:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:08:37.339 16:01:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:08:37.339 16:01:07 -- setup/hugepages.sh@92 -- # local surp 00:08:37.339 16:01:07 -- setup/hugepages.sh@93 -- # local resv 00:08:37.339 16:01:07 -- setup/hugepages.sh@94 -- # local anon 00:08:37.339 16:01:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:37.339 16:01:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:37.339 16:01:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:37.339 16:01:07 -- setup/common.sh@18 -- # local node= 00:08:37.339 16:01:07 -- setup/common.sh@19 -- # local var val 00:08:37.339 16:01:07 -- setup/common.sh@20 -- # local mem_f mem 00:08:37.339 16:01:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:37.339 16:01:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:37.339 16:01:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:37.339 16:01:07 -- setup/common.sh@28 -- # mapfile -t mem 00:08:37.339 16:01:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7010204 kB' 'MemAvailable: 9522756 kB' 'Buffers: 2436 kB' 'Cached: 2711132 kB' 'SwapCached: 0 kB' 'Active: 440188 kB' 'Inactive: 2393388 kB' 'Active(anon): 119788 kB' 'Inactive(anon): 10688 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382700 kB' 
'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 120060 kB' 'Mapped: 48760 kB' 'Shmem: 10468 kB' 'KReclaimable: 92488 kB' 'Slab: 170428 kB' 'SReclaimable: 92488 kB' 'SUnreclaim: 77940 kB' 'KernelStack: 4704 kB' 'PageTables: 3208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 339760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53280 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 
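Note on the trace above: this is setup/common.sh's meminfo scanner at work. It loads /proc/meminfo (or a node's own meminfo file when a node number is passed), strips any "Node N" prefix, then reads each "key: value" pair with IFS=': ' and keeps continuing until the requested key (AnonHugePages in this pass) matches, at which point the value is echoed back to the caller. A minimal stand-alone sketch of that pattern follows; get_meminfo_sketch is an assumed name with simplified error handling, an illustration rather than the SPDK helper itself.

#!/usr/bin/env bash
# Illustrative sketch of the scan traced above (assumed name, not SPDK's helper).
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, read that node's meminfo; its lines carry a "Node N" prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node N " prefix, as the trace does
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # the comparison repeated on every trace line
        echo "$val"                       # value in kB, or a bare count for HugePages_*
        return 0
    done
    echo 0                                # key absent: default to 0 (assumption; the traced run always finds its key)
}

# Usage examples: get_meminfo_sketch AnonHugePages ; get_meminfo_sketch HugePages_Free 0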
00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # 
IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:37.339 16:01:07 -- setup/common.sh@33 -- # echo 0 00:08:37.339 16:01:07 -- setup/common.sh@33 -- # return 0 00:08:37.339 16:01:07 -- setup/hugepages.sh@97 -- # anon=0 00:08:37.339 16:01:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:37.339 16:01:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:37.339 16:01:07 -- setup/common.sh@18 -- # local node= 00:08:37.339 16:01:07 -- setup/common.sh@19 -- # local var val 00:08:37.339 16:01:07 -- setup/common.sh@20 -- # local mem_f mem 00:08:37.339 16:01:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:37.339 16:01:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:37.339 16:01:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:37.339 16:01:07 -- setup/common.sh@28 -- # mapfile -t mem 00:08:37.339 16:01:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:37.339 16:01:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7010204 kB' 'MemAvailable: 9522756 kB' 'Buffers: 2436 kB' 'Cached: 2711132 kB' 'SwapCached: 0 kB' 'Active: 439928 kB' 'Inactive: 2393380 kB' 'Active(anon): 119528 kB' 'Inactive(anon): 10680 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 119772 kB' 'Mapped: 48384 kB' 'Shmem: 10468 kB' 'KReclaimable: 92488 kB' 'Slab: 170420 kB' 'SReclaimable: 92488 kB' 'SUnreclaim: 77932 kB' 'KernelStack: 4556 kB' 'PageTables: 3036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 339760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53248 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 
16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.339 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.339 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 
-- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.340 16:01:07 -- setup/common.sh@33 -- # echo 0 00:08:37.340 16:01:07 -- setup/common.sh@33 -- # return 0 00:08:37.340 16:01:07 -- setup/hugepages.sh@99 -- # surp=0 00:08:37.340 16:01:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:37.340 16:01:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:37.340 16:01:07 -- setup/common.sh@18 -- # local node= 00:08:37.340 16:01:07 -- setup/common.sh@19 -- # local var val 00:08:37.340 16:01:07 -- setup/common.sh@20 -- # local mem_f mem 00:08:37.340 16:01:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:37.340 16:01:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:37.340 16:01:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:37.340 16:01:07 -- setup/common.sh@28 -- # mapfile -t mem 00:08:37.340 16:01:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7009952 kB' 'MemAvailable: 9522504 kB' 'Buffers: 2436 kB' 'Cached: 2711132 kB' 'SwapCached: 0 kB' 'Active: 439588 kB' 'Inactive: 2393380 kB' 'Active(anon): 119188 kB' 'Inactive(anon): 10680 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 119484 kB' 'Mapped: 48384 kB' 'Shmem: 10468 kB' 'KReclaimable: 92488 kB' 'Slab: 170420 kB' 'SReclaimable: 92488 kB' 'SUnreclaim: 77932 kB' 'KernelStack: 4572 kB' 'PageTables: 3064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 339760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53264 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ MemAvailable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 
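A quick readability note, since the pattern dominates this stretch of the log: runs such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are not corruption. The key being searched for arrives through a quoted variable on the right-hand side of ==, and bash's xtrace re-quotes that side character by character so the printed pattern would still match literally if re-executed; the unquoted key on the left of the same test stays readable. The "16:01:07 -- setup/common.sh@NN -- #" prefix likewise appears to come from a custom PS4 the test scripts set (timestamp plus source file and line). The escaping effect can be reproduced with a plain prompt, roughly as below (exact output may vary by bash version):

bash -xc 'get=HugePages_Rsvd; [[ MemTotal == "$get" ]]' 2>&1 | tail -n 1
# expected to print something close to: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]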
00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.340 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.340 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 
-- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:37.341 16:01:07 -- setup/common.sh@33 -- # echo 0 00:08:37.341 16:01:07 -- setup/common.sh@33 -- # return 0 00:08:37.341 16:01:07 -- setup/hugepages.sh@100 -- # resv=0 00:08:37.341 nr_hugepages=1024 00:08:37.341 16:01:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:37.341 resv_hugepages=0 00:08:37.341 16:01:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:37.341 16:01:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:37.341 surplus_hugepages=0 00:08:37.341 anon_hugepages=0 00:08:37.341 16:01:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:37.341 16:01:07 -- setup/hugepages.sh@107 -- # (( 1024 == 
nr_hugepages + surp + resv )) 00:08:37.341 16:01:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:37.341 16:01:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:37.341 16:01:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:37.341 16:01:07 -- setup/common.sh@18 -- # local node= 00:08:37.341 16:01:07 -- setup/common.sh@19 -- # local var val 00:08:37.341 16:01:07 -- setup/common.sh@20 -- # local mem_f mem 00:08:37.341 16:01:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:37.341 16:01:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:37.341 16:01:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:37.341 16:01:07 -- setup/common.sh@28 -- # mapfile -t mem 00:08:37.341 16:01:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7009952 kB' 'MemAvailable: 9522504 kB' 'Buffers: 2436 kB' 'Cached: 2711132 kB' 'SwapCached: 0 kB' 'Active: 439740 kB' 'Inactive: 2393380 kB' 'Active(anon): 119340 kB' 'Inactive(anon): 10680 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'AnonPages: 119628 kB' 'Mapped: 48384 kB' 'Shmem: 10468 kB' 'KReclaimable: 92488 kB' 'Slab: 170420 kB' 'SReclaimable: 92488 kB' 'SUnreclaim: 77932 kB' 'KernelStack: 4556 kB' 'PageTables: 3032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 339760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53264 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 8212480 kB' 'DirectMap1G: 6291456 kB' 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 
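The arithmetic at hugepages.sh@107 through @110 is the verification this whole pass builds up to: with the anon, surplus and reserved counts collected (all 0 here) and nr_hugepages at 1024, the script requires that the expected count and the kernel-reported HugePages_Total (the scan that starts just above and continues below) both equal nr_hugepages plus surplus plus reserved. A condensed sketch of that bookkeeping, reusing the get_meminfo_sketch helper from the earlier note; the function name and layout are illustrative assumptions, not the SPDK implementation:

verify_nr_hugepages_sketch() {
    local nr_hugepages=$1            # e.g. 1024, the count the test configured
    local anon surp resv total
    anon=$(get_meminfo_sketch AnonHugePages)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    total=$(get_meminfo_sketch HugePages_Total)
    # The pool is consistent only when the kernel-reported total accounts for
    # the requested pages plus any surplus and reserved pages.
    (( total == nr_hugepages + surp + resv ))
}

# Usage example: verify_nr_hugepages_sketch 1024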
00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.341 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.341 16:01:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:37.341 16:01:07 -- setup/common.sh@33 -- # echo 1024 00:08:37.341 16:01:07 -- setup/common.sh@33 -- # return 0 00:08:37.341 16:01:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:37.342 16:01:07 -- setup/hugepages.sh@112 -- # get_nodes 00:08:37.342 16:01:07 -- setup/hugepages.sh@27 -- # local node 00:08:37.342 16:01:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:37.342 16:01:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:37.342 16:01:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:37.342 16:01:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:37.342 16:01:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:37.342 16:01:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:37.342 16:01:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:37.342 16:01:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:37.342 16:01:07 -- setup/common.sh@18 -- # local node=0 00:08:37.342 16:01:07 -- setup/common.sh@19 -- # local var val 00:08:37.342 16:01:07 -- setup/common.sh@20 -- # local mem_f mem 00:08:37.342 16:01:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:37.342 16:01:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:37.342 16:01:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:37.342 16:01:07 -- setup/common.sh@28 -- # mapfile -t mem 00:08:37.342 16:01:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:37.342 16:01:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 7009700 kB' 'MemUsed: 5222544 kB' 'SwapCached: 0 kB' 'Active: 439624 kB' 'Inactive: 2393364 kB' 
'Active(anon): 119224 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2382700 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 1592 kB' 'Writeback: 0 kB' 'FilePages: 2713568 kB' 'Mapped: 48108 kB' 'AnonPages: 119456 kB' 'Shmem: 10468 kB' 'KernelStack: 4608 kB' 'PageTables: 2968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92488 kB' 'Slab: 170500 kB' 'SReclaimable: 92488 kB' 'SUnreclaim: 78012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 
16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- 
# continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # continue 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # IFS=': ' 00:08:37.342 16:01:07 -- setup/common.sh@31 -- # read -r var val _ 00:08:37.342 16:01:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:37.342 16:01:07 -- setup/common.sh@33 -- # echo 0 00:08:37.342 16:01:07 -- setup/common.sh@33 -- # return 0 00:08:37.342 16:01:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:37.342 16:01:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:37.342 16:01:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:37.342 16:01:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:37.342 node0=1024 expecting 1024 00:08:37.342 16:01:07 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:37.342 16:01:07 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:37.342 00:08:37.342 real 0m1.127s 00:08:37.342 user 0m0.581s 00:08:37.342 sys 0m0.619s 00:08:37.342 16:01:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:37.342 16:01:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.342 ************************************ 00:08:37.342 END TEST no_shrink_alloc 00:08:37.342 ************************************ 00:08:37.342 16:01:07 -- setup/hugepages.sh@217 -- # clear_hp 00:08:37.342 16:01:07 -- setup/hugepages.sh@37 -- # local node hp 00:08:37.342 16:01:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:37.342 16:01:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:37.342 16:01:07 -- setup/hugepages.sh@41 -- # echo 0 00:08:37.342 16:01:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:37.342 16:01:07 -- setup/hugepages.sh@41 -- # echo 0 00:08:37.607 16:01:07 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:08:37.607 16:01:07 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:37.607 00:08:37.607 real 0m5.444s 00:08:37.607 user 0m2.496s 00:08:37.607 sys 0m3.014s 00:08:37.607 16:01:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:37.607 16:01:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.607 ************************************ 00:08:37.607 END TEST hugepages 00:08:37.607 ************************************ 00:08:37.607 16:01:07 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:08:37.607 16:01:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:37.607 16:01:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.607 16:01:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.607 ************************************ 00:08:37.607 START TEST driver 00:08:37.607 ************************************ 00:08:37.607 16:01:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:08:37.607 * Looking for test storage... 
00:08:37.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:37.607 16:01:07 -- setup/driver.sh@68 -- # setup reset 00:08:37.607 16:01:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:37.607 16:01:07 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:38.539 16:01:08 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:08:38.539 16:01:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:38.539 16:01:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.539 16:01:08 -- common/autotest_common.sh@10 -- # set +x 00:08:38.539 ************************************ 00:08:38.539 START TEST guess_driver 00:08:38.539 ************************************ 00:08:38.539 16:01:08 -- common/autotest_common.sh@1111 -- # guess_driver 00:08:38.539 16:01:08 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:08:38.539 16:01:08 -- setup/driver.sh@47 -- # local fail=0 00:08:38.539 16:01:08 -- setup/driver.sh@49 -- # pick_driver 00:08:38.539 16:01:08 -- setup/driver.sh@36 -- # vfio 00:08:38.539 16:01:08 -- setup/driver.sh@21 -- # local iommu_grups 00:08:38.539 16:01:08 -- setup/driver.sh@22 -- # local unsafe_vfio 00:08:38.539 16:01:08 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:08:38.539 16:01:08 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:08:38.539 16:01:08 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:08:38.539 16:01:08 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:08:38.539 16:01:08 -- setup/driver.sh@32 -- # return 1 00:08:38.539 16:01:08 -- setup/driver.sh@38 -- # uio 00:08:38.539 16:01:08 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:08:38.539 16:01:08 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:08:38.539 16:01:08 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:08:38.539 16:01:08 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:08:38.539 16:01:08 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.5.12-200.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:08:38.539 insmod /lib/modules/6.5.12-200.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:08:38.539 16:01:08 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:08:38.539 16:01:08 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:08:38.539 16:01:08 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:08:38.539 Looking for driver=uio_pci_generic 00:08:38.539 16:01:08 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:08:38.539 16:01:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:38.539 16:01:08 -- setup/driver.sh@45 -- # setup output config 00:08:38.540 16:01:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:38.540 16:01:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:39.105 16:01:09 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:08:39.105 16:01:09 -- setup/driver.sh@58 -- # continue 00:08:39.105 16:01:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:39.362 16:01:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:08:39.362 16:01:09 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:08:39.362 16:01:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:39.362 16:01:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:08:39.362 16:01:09 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:08:39.362 16:01:09 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:39.362 16:01:09 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:08:39.362 16:01:09 -- setup/driver.sh@65 -- # setup reset 00:08:39.362 16:01:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:39.362 16:01:09 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:40.297 00:08:40.297 real 0m1.671s 00:08:40.297 user 0m0.589s 00:08:40.297 sys 0m1.117s 00:08:40.297 16:01:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:40.297 16:01:09 -- common/autotest_common.sh@10 -- # set +x 00:08:40.297 ************************************ 00:08:40.297 END TEST guess_driver 00:08:40.298 ************************************ 00:08:40.298 00:08:40.298 real 0m2.562s 00:08:40.298 user 0m0.888s 00:08:40.298 sys 0m1.772s 00:08:40.298 16:01:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:40.298 ************************************ 00:08:40.298 END TEST driver 00:08:40.298 16:01:09 -- common/autotest_common.sh@10 -- # set +x 00:08:40.298 ************************************ 00:08:40.298 16:01:10 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:08:40.298 16:01:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:40.298 16:01:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.298 16:01:10 -- common/autotest_common.sh@10 -- # set +x 00:08:40.298 ************************************ 00:08:40.298 START TEST devices 00:08:40.298 ************************************ 00:08:40.298 16:01:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:08:40.298 * Looking for test storage... 00:08:40.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:40.298 16:01:10 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:08:40.298 16:01:10 -- setup/devices.sh@192 -- # setup reset 00:08:40.298 16:01:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:40.298 16:01:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:41.238 16:01:11 -- setup/devices.sh@194 -- # get_zoned_devs 00:08:41.238 16:01:11 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:41.238 16:01:11 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:41.238 16:01:11 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:41.238 16:01:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:41.238 16:01:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:41.238 16:01:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:41.238 16:01:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:41.238 16:01:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:41.238 16:01:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:41.238 16:01:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:08:41.238 16:01:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:08:41.238 16:01:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:08:41.238 16:01:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:41.238 16:01:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:41.238 16:01:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:08:41.238 16:01:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:08:41.238 16:01:11 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:08:41.238 16:01:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:41.238 16:01:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:41.238 16:01:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:41.238 16:01:11 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:41.238 16:01:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:41.238 16:01:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:41.238 16:01:11 -- setup/devices.sh@196 -- # blocks=() 00:08:41.238 16:01:11 -- setup/devices.sh@196 -- # declare -a blocks 00:08:41.238 16:01:11 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:08:41.238 16:01:11 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:08:41.238 16:01:11 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:08:41.238 16:01:11 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:41.238 16:01:11 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:08:41.238 16:01:11 -- setup/devices.sh@201 -- # ctrl=nvme0 00:08:41.238 16:01:11 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:08:41.238 16:01:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:08:41.238 16:01:11 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:08:41.238 16:01:11 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:08:41.238 16:01:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:08:41.238 No valid GPT data, bailing 00:08:41.238 16:01:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:41.238 16:01:11 -- scripts/common.sh@391 -- # pt= 00:08:41.238 16:01:11 -- scripts/common.sh@392 -- # return 1 00:08:41.238 16:01:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:08:41.238 16:01:11 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:41.238 16:01:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:41.238 16:01:11 -- setup/common.sh@80 -- # echo 4294967296 00:08:41.238 16:01:11 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:41.238 16:01:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:41.238 16:01:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:08:41.238 16:01:11 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:41.238 16:01:11 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:08:41.238 16:01:11 -- setup/devices.sh@201 -- # ctrl=nvme0 00:08:41.238 16:01:11 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:08:41.238 16:01:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:08:41.238 16:01:11 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:08:41.238 16:01:11 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:08:41.238 16:01:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:08:41.238 No valid GPT data, bailing 00:08:41.238 16:01:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:08:41.521 16:01:11 -- scripts/common.sh@391 -- # pt= 00:08:41.521 16:01:11 -- scripts/common.sh@392 -- # return 1 00:08:41.521 16:01:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:08:41.521 16:01:11 -- setup/common.sh@76 -- # local dev=nvme0n2 00:08:41.521 16:01:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:08:41.521 16:01:11 -- setup/common.sh@80 -- # echo 4294967296 00:08:41.521 16:01:11 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:41.521 16:01:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:41.521 16:01:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:08:41.521 16:01:11 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:41.521 16:01:11 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:08:41.521 16:01:11 -- setup/devices.sh@201 -- # ctrl=nvme0 00:08:41.521 16:01:11 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:08:41.521 16:01:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:08:41.521 16:01:11 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:08:41.521 16:01:11 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:08:41.521 16:01:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:08:41.521 No valid GPT data, bailing 00:08:41.521 16:01:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:08:41.521 16:01:11 -- scripts/common.sh@391 -- # pt= 00:08:41.521 16:01:11 -- scripts/common.sh@392 -- # return 1 00:08:41.521 16:01:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:08:41.521 16:01:11 -- setup/common.sh@76 -- # local dev=nvme0n3 00:08:41.521 16:01:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:08:41.521 16:01:11 -- setup/common.sh@80 -- # echo 4294967296 00:08:41.521 16:01:11 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:41.521 16:01:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:41.521 16:01:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:08:41.521 16:01:11 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:41.521 16:01:11 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:08:41.521 16:01:11 -- setup/devices.sh@201 -- # ctrl=nvme1 00:08:41.521 16:01:11 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:08:41.521 16:01:11 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:08:41.521 16:01:11 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:08:41.521 16:01:11 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:08:41.521 16:01:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:08:41.521 No valid GPT data, bailing 00:08:41.521 16:01:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:41.521 16:01:11 -- scripts/common.sh@391 -- # pt= 00:08:41.521 16:01:11 -- scripts/common.sh@392 -- # return 1 00:08:41.521 16:01:11 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:08:41.521 16:01:11 -- setup/common.sh@76 -- # local dev=nvme1n1 00:08:41.521 16:01:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:08:41.521 16:01:11 -- setup/common.sh@80 -- # echo 5368709120 00:08:41.521 16:01:11 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:08:41.521 16:01:11 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:41.521 16:01:11 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:08:41.521 16:01:11 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:08:41.521 16:01:11 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:08:41.521 16:01:11 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:08:41.521 16:01:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:41.521 16:01:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.521 16:01:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.521 
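[Editor's note] Before nvme_mount starts, the devices test above builds its candidate list by skipping zoned namespaces (per /sys/block/<dev>/queue/zoned), rejecting anything smaller than min_disk_size=3221225472 bytes, and keeping only disks on which spdk-gpt.py finds no partition table ("No valid GPT data, bailing"). A condensed sketch of the zoned/size part of that filter, assuming the usual sysfs layout where the size file counts 512-byte sectors:

    min_disk_size=3221225472    # 3 GiB, the same threshold as the trace

    for sysblk in /sys/block/nvme*; do
        dev=${sysblk##*/}
        # "none" marks an ordinary namespace; host-aware/host-managed are zoned (ZNS)
        if [[ -e $sysblk/queue/zoned && $(<"$sysblk/queue/zoned") != none ]]; then
            continue
        fi
        size_bytes=$(( $(<"$sysblk/size") * 512 ))   # size is reported in 512-byte sectors
        (( size_bytes >= min_disk_size )) && echo "$dev: $size_bytes bytes, large enough"
    done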
************************************ 00:08:41.521 START TEST nvme_mount 00:08:41.521 ************************************ 00:08:41.521 16:01:11 -- common/autotest_common.sh@1111 -- # nvme_mount 00:08:41.521 16:01:11 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:08:41.521 16:01:11 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:08:41.521 16:01:11 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:41.521 16:01:11 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:41.521 16:01:11 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:08:41.521 16:01:11 -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:41.521 16:01:11 -- setup/common.sh@40 -- # local part_no=1 00:08:41.521 16:01:11 -- setup/common.sh@41 -- # local size=1073741824 00:08:41.521 16:01:11 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:41.521 16:01:11 -- setup/common.sh@44 -- # parts=() 00:08:41.521 16:01:11 -- setup/common.sh@44 -- # local parts 00:08:41.521 16:01:11 -- setup/common.sh@46 -- # (( part = 1 )) 00:08:41.521 16:01:11 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:41.521 16:01:11 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:41.521 16:01:11 -- setup/common.sh@46 -- # (( part++ )) 00:08:41.521 16:01:11 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:41.521 16:01:11 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:08:41.521 16:01:11 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:41.521 16:01:11 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:08:42.895 Creating new GPT entries in memory. 00:08:42.895 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:42.895 other utilities. 00:08:42.895 16:01:12 -- setup/common.sh@57 -- # (( part = 1 )) 00:08:42.895 16:01:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:42.895 16:01:12 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:42.895 16:01:12 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:42.895 16:01:12 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:43.827 Creating new GPT entries in memory. 00:08:43.827 The operation has completed successfully. 
00:08:43.827 16:01:13 -- setup/common.sh@57 -- # (( part++ )) 00:08:43.827 16:01:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:43.827 16:01:13 -- setup/common.sh@62 -- # wait 69047 00:08:43.827 16:01:13 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:43.827 16:01:13 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:08:43.827 16:01:13 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:43.827 16:01:13 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:08:43.827 16:01:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:08:43.827 16:01:13 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:43.827 16:01:13 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:43.827 16:01:13 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:43.827 16:01:13 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:08:43.827 16:01:13 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:43.827 16:01:13 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:43.827 16:01:13 -- setup/devices.sh@53 -- # local found=0 00:08:43.827 16:01:13 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:43.827 16:01:13 -- setup/devices.sh@56 -- # : 00:08:43.827 16:01:13 -- setup/devices.sh@59 -- # local pci status 00:08:43.827 16:01:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:43.827 16:01:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:43.827 16:01:13 -- setup/devices.sh@47 -- # setup output config 00:08:43.827 16:01:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:43.827 16:01:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:44.084 16:01:13 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:44.084 16:01:13 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:08:44.084 16:01:13 -- setup/devices.sh@63 -- # found=1 00:08:44.084 16:01:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:44.084 16:01:13 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:44.084 16:01:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:44.084 16:01:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:44.084 16:01:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:44.343 16:01:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:44.343 16:01:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:44.343 16:01:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:44.343 16:01:14 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:44.343 16:01:14 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:44.343 16:01:14 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:44.343 16:01:14 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:44.343 16:01:14 -- setup/devices.sh@110 -- # cleanup_nvme 00:08:44.343 16:01:14 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:44.343 16:01:14 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:44.343 16:01:14 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:44.343 16:01:14 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:44.343 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:44.343 16:01:14 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:44.343 16:01:14 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:44.600 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:44.600 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:44.600 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:44.600 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:44.600 16:01:14 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:08:44.600 16:01:14 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:08:44.600 16:01:14 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:44.600 16:01:14 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:08:44.600 16:01:14 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:08:44.600 16:01:14 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:44.600 16:01:14 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:44.600 16:01:14 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:44.600 16:01:14 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:08:44.600 16:01:14 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:44.600 16:01:14 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:44.600 16:01:14 -- setup/devices.sh@53 -- # local found=0 00:08:44.600 16:01:14 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:44.600 16:01:14 -- setup/devices.sh@56 -- # : 00:08:44.600 16:01:14 -- setup/devices.sh@59 -- # local pci status 00:08:44.600 16:01:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:44.600 16:01:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:44.600 16:01:14 -- setup/devices.sh@47 -- # setup output config 00:08:44.600 16:01:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:44.600 16:01:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:44.858 16:01:14 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:44.858 16:01:14 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:08:44.858 16:01:14 -- setup/devices.sh@63 -- # found=1 00:08:44.858 16:01:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:44.858 16:01:14 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:44.858 
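[Editor's note] In each verify pass above, setup.sh config runs with PCI_ALLOWED limited to 0000:00:11.0, and the test scans the "Active devices: ..., so not binding PCI dev" status lines to confirm that a namespace with a live mount is left on its kernel driver. The same "is this namespace in use?" question can be asked directly of /proc/mounts; a small illustrative check, not the script's own logic:

    dev=nvme0n1
    # a namespace counts as busy if the whole disk or any partition of it is mounted
    if grep -qE "^/dev/${dev}(p[0-9]+)? " /proc/mounts; then
        echo "$dev is mounted; keep its PCI device bound to the nvme driver"
    else
        echo "$dev is idle; safe to hand it to uio_pci_generic or vfio-pci"
    fi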
16:01:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.116 16:01:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:45.116 16:01:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.116 16:01:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:45.116 16:01:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.116 16:01:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:45.116 16:01:15 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:45.116 16:01:15 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:45.116 16:01:15 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:45.116 16:01:15 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:45.116 16:01:15 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:45.116 16:01:15 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:08:45.116 16:01:15 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:45.116 16:01:15 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:08:45.116 16:01:15 -- setup/devices.sh@50 -- # local mount_point= 00:08:45.116 16:01:15 -- setup/devices.sh@51 -- # local test_file= 00:08:45.116 16:01:15 -- setup/devices.sh@53 -- # local found=0 00:08:45.116 16:01:15 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:45.116 16:01:15 -- setup/devices.sh@59 -- # local pci status 00:08:45.116 16:01:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.116 16:01:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:45.116 16:01:15 -- setup/devices.sh@47 -- # setup output config 00:08:45.116 16:01:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:45.116 16:01:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:45.374 16:01:15 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:45.374 16:01:15 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:08:45.374 16:01:15 -- setup/devices.sh@63 -- # found=1 00:08:45.374 16:01:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.374 16:01:15 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:45.374 16:01:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.632 16:01:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:45.632 16:01:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.632 16:01:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:45.632 16:01:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:45.889 16:01:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:45.889 16:01:15 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:45.889 16:01:15 -- setup/devices.sh@68 -- # return 0 00:08:45.889 16:01:15 -- setup/devices.sh@128 -- # cleanup_nvme 00:08:45.889 16:01:15 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:45.889 16:01:15 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:45.889 16:01:15 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:45.889 16:01:15 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:45.889 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:08:45.889 00:08:45.889 real 0m4.228s 00:08:45.889 user 0m0.733s 00:08:45.889 sys 0m1.190s 00:08:45.889 16:01:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:45.889 16:01:15 -- common/autotest_common.sh@10 -- # set +x 00:08:45.889 ************************************ 00:08:45.889 END TEST nvme_mount 00:08:45.889 ************************************ 00:08:45.889 16:01:15 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:08:45.889 16:01:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:45.889 16:01:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.889 16:01:15 -- common/autotest_common.sh@10 -- # set +x 00:08:45.889 ************************************ 00:08:45.889 START TEST dm_mount 00:08:45.889 ************************************ 00:08:45.889 16:01:15 -- common/autotest_common.sh@1111 -- # dm_mount 00:08:45.889 16:01:15 -- setup/devices.sh@144 -- # pv=nvme0n1 00:08:45.889 16:01:15 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:08:45.889 16:01:15 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:08:45.889 16:01:15 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:08:45.889 16:01:15 -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:45.889 16:01:15 -- setup/common.sh@40 -- # local part_no=2 00:08:45.889 16:01:15 -- setup/common.sh@41 -- # local size=1073741824 00:08:45.889 16:01:15 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:45.889 16:01:15 -- setup/common.sh@44 -- # parts=() 00:08:45.889 16:01:15 -- setup/common.sh@44 -- # local parts 00:08:45.889 16:01:15 -- setup/common.sh@46 -- # (( part = 1 )) 00:08:45.889 16:01:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:45.889 16:01:15 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:45.889 16:01:15 -- setup/common.sh@46 -- # (( part++ )) 00:08:45.889 16:01:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:45.889 16:01:15 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:45.889 16:01:15 -- setup/common.sh@46 -- # (( part++ )) 00:08:45.889 16:01:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:45.889 16:01:15 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:08:45.889 16:01:15 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:45.889 16:01:15 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:08:46.875 Creating new GPT entries in memory. 00:08:46.875 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:46.875 other utilities. 00:08:46.875 16:01:16 -- setup/common.sh@57 -- # (( part = 1 )) 00:08:46.875 16:01:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:46.875 16:01:16 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:46.875 16:01:16 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:46.875 16:01:16 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:48.243 Creating new GPT entries in memory. 00:08:48.243 The operation has completed successfully. 00:08:48.243 16:01:17 -- setup/common.sh@57 -- # (( part++ )) 00:08:48.243 16:01:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:48.243 16:01:17 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:08:48.243 16:01:17 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:48.243 16:01:17 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:08:49.175 The operation has completed successfully. 00:08:49.175 16:01:18 -- setup/common.sh@57 -- # (( part++ )) 00:08:49.175 16:01:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:49.175 16:01:18 -- setup/common.sh@62 -- # wait 69511 00:08:49.175 16:01:18 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:08:49.175 16:01:18 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:49.175 16:01:18 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:49.175 16:01:18 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:08:49.175 16:01:18 -- setup/devices.sh@160 -- # for t in {1..5} 00:08:49.175 16:01:18 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:49.175 16:01:18 -- setup/devices.sh@161 -- # break 00:08:49.175 16:01:18 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:49.175 16:01:18 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:08:49.175 16:01:18 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:08:49.175 16:01:18 -- setup/devices.sh@166 -- # dm=dm-0 00:08:49.175 16:01:18 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:08:49.175 16:01:18 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:08:49.175 16:01:18 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:49.175 16:01:18 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:08:49.175 16:01:18 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:49.175 16:01:18 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:49.175 16:01:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:08:49.175 16:01:18 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:49.175 16:01:18 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:49.175 16:01:18 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:49.175 16:01:18 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:08:49.175 16:01:18 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:49.175 16:01:18 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:49.175 16:01:18 -- setup/devices.sh@53 -- # local found=0 00:08:49.175 16:01:18 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:08:49.175 16:01:18 -- setup/devices.sh@56 -- # : 00:08:49.175 16:01:18 -- setup/devices.sh@59 -- # local pci status 00:08:49.175 16:01:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:49.175 16:01:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:49.175 16:01:18 -- setup/devices.sh@47 -- # setup output config 00:08:49.175 16:01:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:49.175 16:01:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:49.433 16:01:19 -- 
setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:49.433 16:01:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:08:49.433 16:01:19 -- setup/devices.sh@63 -- # found=1 00:08:49.433 16:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:49.433 16:01:19 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:49.433 16:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:49.690 16:01:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:49.690 16:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:49.690 16:01:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:49.690 16:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:49.690 16:01:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:49.690 16:01:19 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:08:49.690 16:01:19 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:49.690 16:01:19 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:08:49.690 16:01:19 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:49.690 16:01:19 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:49.690 16:01:19 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:08:49.690 16:01:19 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:49.690 16:01:19 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:08:49.690 16:01:19 -- setup/devices.sh@50 -- # local mount_point= 00:08:49.690 16:01:19 -- setup/devices.sh@51 -- # local test_file= 00:08:49.690 16:01:19 -- setup/devices.sh@53 -- # local found=0 00:08:49.690 16:01:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:49.690 16:01:19 -- setup/devices.sh@59 -- # local pci status 00:08:49.690 16:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:49.690 16:01:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:49.690 16:01:19 -- setup/devices.sh@47 -- # setup output config 00:08:49.690 16:01:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:08:49.690 16:01:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:49.948 16:01:19 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:49.948 16:01:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:08:49.948 16:01:19 -- setup/devices.sh@63 -- # found=1 00:08:49.948 16:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:49.948 16:01:19 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:49.948 16:01:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.205 16:01:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:50.205 16:01:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.205 16:01:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:50.205 16:01:20 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.463 16:01:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:50.463 16:01:20 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:50.463 16:01:20 -- setup/devices.sh@68 -- # return 0 00:08:50.463 16:01:20 -- setup/devices.sh@187 -- # cleanup_dm 00:08:50.463 16:01:20 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:50.463 16:01:20 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:50.463 16:01:20 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:08:50.463 16:01:20 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:50.463 16:01:20 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:08:50.463 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:50.463 16:01:20 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:50.463 16:01:20 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:08:50.463 00:08:50.463 real 0m4.478s 00:08:50.463 user 0m0.543s 00:08:50.463 sys 0m0.901s 00:08:50.463 16:01:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:50.463 16:01:20 -- common/autotest_common.sh@10 -- # set +x 00:08:50.463 ************************************ 00:08:50.463 END TEST dm_mount 00:08:50.463 ************************************ 00:08:50.463 16:01:20 -- setup/devices.sh@1 -- # cleanup 00:08:50.463 16:01:20 -- setup/devices.sh@11 -- # cleanup_nvme 00:08:50.463 16:01:20 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.463 16:01:20 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:50.463 16:01:20 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:50.463 16:01:20 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:50.463 16:01:20 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:50.721 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:50.721 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:50.721 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:50.721 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:50.721 16:01:20 -- setup/devices.sh@12 -- # cleanup_dm 00:08:50.721 16:01:20 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:50.721 16:01:20 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:50.721 16:01:20 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:50.721 16:01:20 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:50.721 16:01:20 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:08:50.721 16:01:20 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:08:50.721 ************************************ 00:08:50.721 END TEST devices 00:08:50.721 ************************************ 00:08:50.721 00:08:50.721 real 0m10.512s 00:08:50.721 user 0m2.036s 00:08:50.721 sys 0m2.852s 00:08:50.721 16:01:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:50.721 16:01:20 -- common/autotest_common.sh@10 -- # set +x 00:08:50.721 00:08:50.721 real 0m24.743s 00:08:50.721 user 0m7.984s 00:08:50.721 sys 0m11.191s 00:08:50.721 16:01:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:50.721 16:01:20 -- common/autotest_common.sh@10 -- # set +x 00:08:50.721 ************************************ 00:08:50.721 END TEST setup.sh 00:08:50.721 ************************************ 00:08:50.979 16:01:20 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:51.543 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:51.543 Hugepages 00:08:51.543 node hugesize free / total 00:08:51.543 node0 1048576kB 0 / 0 00:08:51.544 node0 2048kB 2048 / 2048 00:08:51.544 00:08:51.544 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:51.801 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:51.801 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:08:52.058 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:08:52.058 16:01:21 -- spdk/autotest.sh@130 -- # uname -s 00:08:52.058 16:01:21 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:08:52.058 16:01:21 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:08:52.058 16:01:21 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:52.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:52.880 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:52.880 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:52.880 16:01:22 -- common/autotest_common.sh@1518 -- # sleep 1 00:08:54.268 16:01:23 -- common/autotest_common.sh@1519 -- # bdfs=() 00:08:54.268 16:01:23 -- common/autotest_common.sh@1519 -- # local bdfs 00:08:54.268 16:01:23 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:54.268 16:01:23 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:54.268 16:01:23 -- common/autotest_common.sh@1499 -- # bdfs=() 00:08:54.268 16:01:23 -- common/autotest_common.sh@1499 -- # local bdfs 00:08:54.268 16:01:23 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:54.268 16:01:23 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:54.268 16:01:23 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:08:54.268 16:01:23 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:08:54.268 16:01:23 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:54.268 16:01:23 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:54.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:54.525 Waiting for block devices as requested 00:08:54.525 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.525 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:54.783 16:01:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:54.783 16:01:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:54.783 16:01:24 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:54.783 16:01:24 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:08:54.783 16:01:24 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:54.783 16:01:24 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:54.783 16:01:24 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:54.783 16:01:24 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:08:54.783 16:01:24 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme1 00:08:54.783 16:01:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:08:54.783 16:01:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:08:54.783 16:01:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:54.783 16:01:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:54.783 16:01:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:54.783 16:01:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:54.783 16:01:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:54.783 16:01:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:54.783 16:01:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:54.783 16:01:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:54.783 16:01:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:54.783 16:01:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:54.783 16:01:24 -- common/autotest_common.sh@1543 -- # continue 00:08:54.783 16:01:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:54.783 16:01:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:54.783 16:01:24 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:08:54.783 16:01:24 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:54.783 16:01:24 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:54.783 16:01:24 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:54.783 16:01:24 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:54.783 16:01:24 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:08:54.783 16:01:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:54.783 16:01:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:54.783 16:01:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:54.783 16:01:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:54.783 16:01:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:54.783 16:01:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:54.783 16:01:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:54.783 16:01:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:54.783 16:01:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:54.783 16:01:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:54.783 16:01:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:54.783 16:01:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:54.783 16:01:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:54.783 16:01:24 -- common/autotest_common.sh@1543 -- # continue 00:08:54.783 16:01:24 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:08:54.783 16:01:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:54.783 16:01:24 -- common/autotest_common.sh@10 -- # set +x 00:08:54.783 16:01:24 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:08:54.783 16:01:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:54.783 16:01:24 -- common/autotest_common.sh@10 -- # set +x 00:08:54.783 16:01:24 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:55.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:08:55.713 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.713 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.713 16:01:25 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:08:55.713 16:01:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:55.713 16:01:25 -- common/autotest_common.sh@10 -- # set +x 00:08:55.713 16:01:25 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:08:55.713 16:01:25 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:08:55.975 16:01:25 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:08:55.975 16:01:25 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:55.975 16:01:25 -- common/autotest_common.sh@1563 -- # local bdfs 00:08:55.975 16:01:25 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:08:55.975 16:01:25 -- common/autotest_common.sh@1499 -- # bdfs=() 00:08:55.975 16:01:25 -- common/autotest_common.sh@1499 -- # local bdfs 00:08:55.975 16:01:25 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:55.975 16:01:25 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:55.975 16:01:25 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:08:55.975 16:01:25 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:08:55.975 16:01:25 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:55.975 16:01:25 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:08:55.975 16:01:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:55.975 16:01:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:55.975 16:01:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:55.975 16:01:25 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:08:55.975 16:01:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:55.975 16:01:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:55.975 16:01:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:55.975 16:01:25 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:08:55.975 16:01:25 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:08:55.975 16:01:25 -- common/autotest_common.sh@1579 -- # return 0 00:08:55.975 16:01:25 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:08:55.975 16:01:25 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:08:55.975 16:01:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:55.975 16:01:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:55.975 16:01:25 -- spdk/autotest.sh@162 -- # timing_enter lib 00:08:55.975 16:01:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:55.975 16:01:25 -- common/autotest_common.sh@10 -- # set +x 00:08:55.975 16:01:25 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:55.975 16:01:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:55.975 16:01:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.975 16:01:25 -- common/autotest_common.sh@10 -- # set +x 00:08:55.975 ************************************ 00:08:55.975 START TEST env 00:08:55.975 ************************************ 00:08:55.975 16:01:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:55.975 * Looking for test storage... 
00:08:56.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:56.232 16:01:25 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:56.232 16:01:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:56.232 16:01:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.232 16:01:25 -- common/autotest_common.sh@10 -- # set +x 00:08:56.232 ************************************ 00:08:56.232 START TEST env_memory 00:08:56.232 ************************************ 00:08:56.232 16:01:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:56.232 00:08:56.232 00:08:56.232 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.232 http://cunit.sourceforge.net/ 00:08:56.232 00:08:56.232 00:08:56.232 Suite: memory 00:08:56.232 Test: alloc and free memory map ...[2024-04-15 16:01:26.078773] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:56.232 passed 00:08:56.232 Test: mem map translation ...[2024-04-15 16:01:26.105116] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:56.232 [2024-04-15 16:01:26.105164] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:56.232 [2024-04-15 16:01:26.105209] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:56.232 [2024-04-15 16:01:26.105220] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:56.232 passed 00:08:56.232 Test: mem map registration ...[2024-04-15 16:01:26.153179] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:56.232 [2024-04-15 16:01:26.153241] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:56.232 passed 00:08:56.490 Test: mem map adjacent registrations ...passed 00:08:56.490 00:08:56.490 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.490 suites 1 1 n/a 0 0 00:08:56.490 tests 4 4 4 0 0 00:08:56.490 asserts 152 152 152 0 n/a 00:08:56.490 00:08:56.490 Elapsed time = 0.167 seconds 00:08:56.490 00:08:56.490 real 0m0.181s 00:08:56.490 user 0m0.165s 00:08:56.490 sys 0m0.012s 00:08:56.490 16:01:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:56.490 16:01:26 -- common/autotest_common.sh@10 -- # set +x 00:08:56.490 ************************************ 00:08:56.490 END TEST env_memory 00:08:56.490 ************************************ 00:08:56.490 16:01:26 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:56.490 16:01:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:56.490 16:01:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.490 16:01:26 -- common/autotest_common.sh@10 -- # set +x 00:08:56.490 ************************************ 00:08:56.490 START TEST env_vtophys 00:08:56.490 ************************************ 00:08:56.490 16:01:26 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:56.490 EAL: lib.eal log level changed from notice to debug 00:08:56.490 EAL: Detected lcore 0 as core 0 on socket 0 00:08:56.490 EAL: Detected lcore 1 as core 0 on socket 0 00:08:56.490 EAL: Detected lcore 2 as core 0 on socket 0 00:08:56.490 EAL: Detected lcore 3 as core 0 on socket 0 00:08:56.490 EAL: Detected lcore 4 as core 0 on socket 0 00:08:56.491 EAL: Detected lcore 5 as core 0 on socket 0 00:08:56.491 EAL: Detected lcore 6 as core 0 on socket 0 00:08:56.491 EAL: Detected lcore 7 as core 0 on socket 0 00:08:56.491 EAL: Detected lcore 8 as core 0 on socket 0 00:08:56.491 EAL: Detected lcore 9 as core 0 on socket 0 00:08:56.491 EAL: Maximum logical cores by configuration: 128 00:08:56.491 EAL: Detected CPU lcores: 10 00:08:56.491 EAL: Detected NUMA nodes: 1 00:08:56.491 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:08:56.491 EAL: Detected shared linkage of DPDK 00:08:56.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:08:56.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:08:56.491 EAL: Registered [vdev] bus. 00:08:56.491 EAL: bus.vdev log level changed from disabled to notice 00:08:56.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:08:56.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:08:56.491 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:08:56.491 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:08:56.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:08:56.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:08:56.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:08:56.491 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:08:56.491 EAL: No shared files mode enabled, IPC will be disabled 00:08:56.491 EAL: No shared files mode enabled, IPC is disabled 00:08:56.491 EAL: Selected IOVA mode 'PA' 00:08:56.491 EAL: Probing VFIO support... 00:08:56.491 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:56.491 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:56.491 EAL: Ask a virtual area of 0x2e000 bytes 00:08:56.491 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:56.491 EAL: Setting up physically contiguous memory... 
00:08:56.491 EAL: Setting maximum number of open files to 524288 00:08:56.491 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:56.491 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:56.491 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.491 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:56.491 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:56.491 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.491 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:56.491 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:56.491 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.491 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:56.491 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:56.491 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.491 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:56.491 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:56.491 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.491 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:56.491 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:56.491 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.491 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:56.491 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:56.491 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.491 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:56.491 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:56.491 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.491 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:56.491 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:56.491 EAL: Hugepages will be freed exactly as allocated. 00:08:56.491 EAL: No shared files mode enabled, IPC is disabled 00:08:56.491 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: TSC frequency is ~2100000 KHz 00:08:56.749 EAL: Main lcore 0 is ready (tid=7f96326d4a00;cpuset=[0]) 00:08:56.749 EAL: Trying to obtain current memory policy. 00:08:56.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.749 EAL: Restoring previous memory policy: 0 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was expanded by 2MB 00:08:56.749 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:56.749 EAL: Mem event callback 'spdk:(nil)' registered 00:08:56.749 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:56.749 00:08:56.749 00:08:56.749 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.749 http://cunit.sourceforge.net/ 00:08:56.749 00:08:56.749 00:08:56.749 Suite: components_suite 00:08:56.749 Test: vtophys_malloc_test ...passed 00:08:56.749 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:08:56.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.749 EAL: Restoring previous memory policy: 4 00:08:56.749 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was expanded by 4MB 00:08:56.749 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was shrunk by 4MB 00:08:56.749 EAL: Trying to obtain current memory policy. 00:08:56.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.749 EAL: Restoring previous memory policy: 4 00:08:56.749 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was expanded by 6MB 00:08:56.749 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was shrunk by 6MB 00:08:56.749 EAL: Trying to obtain current memory policy. 00:08:56.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.749 EAL: Restoring previous memory policy: 4 00:08:56.749 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was expanded by 10MB 00:08:56.749 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was shrunk by 10MB 00:08:56.749 EAL: Trying to obtain current memory policy. 00:08:56.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.749 EAL: Restoring previous memory policy: 4 00:08:56.749 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was expanded by 18MB 00:08:56.749 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was shrunk by 18MB 00:08:56.749 EAL: Trying to obtain current memory policy. 00:08:56.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.749 EAL: Restoring previous memory policy: 4 00:08:56.749 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was expanded by 34MB 00:08:56.749 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was shrunk by 34MB 00:08:56.749 EAL: Trying to obtain current memory policy. 
00:08:56.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.749 EAL: Restoring previous memory policy: 4 00:08:56.749 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was expanded by 66MB 00:08:56.749 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was shrunk by 66MB 00:08:56.749 EAL: Trying to obtain current memory policy. 00:08:56.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.749 EAL: Restoring previous memory policy: 4 00:08:56.749 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was expanded by 130MB 00:08:56.749 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.749 EAL: request: mp_malloc_sync 00:08:56.749 EAL: No shared files mode enabled, IPC is disabled 00:08:56.749 EAL: Heap on socket 0 was shrunk by 130MB 00:08:56.749 EAL: Trying to obtain current memory policy. 00:08:56.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.750 EAL: Restoring previous memory policy: 4 00:08:56.750 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.750 EAL: request: mp_malloc_sync 00:08:56.750 EAL: No shared files mode enabled, IPC is disabled 00:08:56.750 EAL: Heap on socket 0 was expanded by 258MB 00:08:57.007 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.007 EAL: request: mp_malloc_sync 00:08:57.007 EAL: No shared files mode enabled, IPC is disabled 00:08:57.007 EAL: Heap on socket 0 was shrunk by 258MB 00:08:57.007 EAL: Trying to obtain current memory policy. 00:08:57.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.007 EAL: Restoring previous memory policy: 4 00:08:57.007 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.007 EAL: request: mp_malloc_sync 00:08:57.007 EAL: No shared files mode enabled, IPC is disabled 00:08:57.007 EAL: Heap on socket 0 was expanded by 514MB 00:08:57.265 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.265 EAL: request: mp_malloc_sync 00:08:57.265 EAL: No shared files mode enabled, IPC is disabled 00:08:57.265 EAL: Heap on socket 0 was shrunk by 514MB 00:08:57.265 EAL: Trying to obtain current memory policy. 
00:08:57.265 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.522 EAL: Restoring previous memory policy: 4 00:08:57.522 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.522 EAL: request: mp_malloc_sync 00:08:57.522 EAL: No shared files mode enabled, IPC is disabled 00:08:57.522 EAL: Heap on socket 0 was expanded by 1026MB 00:08:57.522 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.779 EAL: request: mp_malloc_sync 00:08:57.779 EAL: No shared files mode enabled, IPC is disabled 00:08:57.779 passed 00:08:57.779 00:08:57.779 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:57.779 Run Summary: Type Total Ran Passed Failed Inactive 00:08:57.779 suites 1 1 n/a 0 0 00:08:57.779 tests 2 2 2 0 0 00:08:57.779 asserts 6457 6457 6457 0 n/a 00:08:57.779 00:08:57.779 Elapsed time = 1.048 seconds 00:08:57.779 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.779 EAL: request: mp_malloc_sync 00:08:57.779 EAL: No shared files mode enabled, IPC is disabled 00:08:57.779 EAL: Heap on socket 0 was shrunk by 2MB 00:08:57.779 EAL: No shared files mode enabled, IPC is disabled 00:08:57.779 EAL: No shared files mode enabled, IPC is disabled 00:08:57.779 EAL: No shared files mode enabled, IPC is disabled 00:08:57.779 00:08:57.779 real 0m1.264s 00:08:57.779 user 0m0.661s 00:08:57.779 sys 0m0.453s 00:08:57.779 ************************************ 00:08:57.779 END TEST env_vtophys 00:08:57.779 ************************************ 00:08:57.779 16:01:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:57.779 16:01:27 -- common/autotest_common.sh@10 -- # set +x 00:08:57.779 16:01:27 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:57.779 16:01:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:57.779 16:01:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.779 16:01:27 -- common/autotest_common.sh@10 -- # set +x 00:08:58.035 ************************************ 00:08:58.035 START TEST env_pci 00:08:58.035 ************************************ 00:08:58.035 16:01:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:58.035 00:08:58.035 00:08:58.035 CUnit - A unit testing framework for C - Version 2.1-3 00:08:58.035 http://cunit.sourceforge.net/ 00:08:58.035 00:08:58.035 00:08:58.035 Suite: pci 00:08:58.035 Test: pci_hook ...[2024-04-15 16:01:27.767775] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70785 has claimed it 00:08:58.035 EAL: Cannot find device (10000:00:01.0) 00:08:58.035 EAL: Failed to attach device on primary process 00:08:58.035 passed 00:08:58.035 00:08:58.035 Run Summary: Type Total Ran Passed Failed Inactive 00:08:58.035 suites 1 1 n/a 0 0 00:08:58.035 tests 1 1 1 0 0 00:08:58.035 asserts 25 25 25 0 n/a 00:08:58.035 00:08:58.035 Elapsed time = 0.003 seconds 00:08:58.035 00:08:58.035 real 0m0.020s 00:08:58.035 user 0m0.009s 00:08:58.035 sys 0m0.010s 00:08:58.035 16:01:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:58.035 16:01:27 -- common/autotest_common.sh@10 -- # set +x 00:08:58.035 ************************************ 00:08:58.035 END TEST env_pci 00:08:58.035 ************************************ 00:08:58.035 16:01:27 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:58.035 16:01:27 -- env/env.sh@15 -- # uname 00:08:58.035 16:01:27 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:58.035 16:01:27 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:08:58.035 16:01:27 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:58.035 16:01:27 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:58.035 16:01:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.035 16:01:27 -- common/autotest_common.sh@10 -- # set +x 00:08:58.035 ************************************ 00:08:58.035 START TEST env_dpdk_post_init 00:08:58.035 ************************************ 00:08:58.035 16:01:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:58.035 EAL: Detected CPU lcores: 10 00:08:58.035 EAL: Detected NUMA nodes: 1 00:08:58.035 EAL: Detected shared linkage of DPDK 00:08:58.035 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:58.035 EAL: Selected IOVA mode 'PA' 00:08:58.292 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:58.292 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:58.292 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:58.292 Starting DPDK initialization... 00:08:58.292 Starting SPDK post initialization... 00:08:58.292 SPDK NVMe probe 00:08:58.292 Attaching to 0000:00:10.0 00:08:58.292 Attaching to 0000:00:11.0 00:08:58.292 Attached to 0000:00:10.0 00:08:58.292 Attached to 0000:00:11.0 00:08:58.292 Cleaning up... 00:08:58.292 00:08:58.292 real 0m0.181s 00:08:58.292 user 0m0.039s 00:08:58.292 sys 0m0.039s 00:08:58.292 16:01:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:58.292 16:01:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.292 ************************************ 00:08:58.292 END TEST env_dpdk_post_init 00:08:58.292 ************************************ 00:08:58.292 16:01:28 -- env/env.sh@26 -- # uname 00:08:58.292 16:01:28 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:58.292 16:01:28 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:58.292 16:01:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:58.292 16:01:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.292 16:01:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.292 ************************************ 00:08:58.292 START TEST env_mem_callbacks 00:08:58.292 ************************************ 00:08:58.292 16:01:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:58.292 EAL: Detected CPU lcores: 10 00:08:58.292 EAL: Detected NUMA nodes: 1 00:08:58.292 EAL: Detected shared linkage of DPDK 00:08:58.292 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:58.292 EAL: Selected IOVA mode 'PA' 00:08:58.550 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:58.550 00:08:58.550 00:08:58.550 CUnit - A unit testing framework for C - Version 2.1-3 00:08:58.550 http://cunit.sourceforge.net/ 00:08:58.550 00:08:58.550 00:08:58.550 Suite: memory 00:08:58.550 Test: test ... 
00:08:58.550 register 0x200000200000 2097152 00:08:58.550 malloc 3145728 00:08:58.550 register 0x200000400000 4194304 00:08:58.550 buf 0x200000500000 len 3145728 PASSED 00:08:58.550 malloc 64 00:08:58.550 buf 0x2000004fff40 len 64 PASSED 00:08:58.550 malloc 4194304 00:08:58.550 register 0x200000800000 6291456 00:08:58.550 buf 0x200000a00000 len 4194304 PASSED 00:08:58.550 free 0x200000500000 3145728 00:08:58.550 free 0x2000004fff40 64 00:08:58.550 unregister 0x200000400000 4194304 PASSED 00:08:58.550 free 0x200000a00000 4194304 00:08:58.550 unregister 0x200000800000 6291456 PASSED 00:08:58.550 malloc 8388608 00:08:58.550 register 0x200000400000 10485760 00:08:58.550 buf 0x200000600000 len 8388608 PASSED 00:08:58.550 free 0x200000600000 8388608 00:08:58.550 unregister 0x200000400000 10485760 PASSED 00:08:58.550 passed 00:08:58.550 00:08:58.550 Run Summary: Type Total Ran Passed Failed Inactive 00:08:58.550 suites 1 1 n/a 0 0 00:08:58.550 tests 1 1 1 0 0 00:08:58.550 asserts 15 15 15 0 n/a 00:08:58.550 00:08:58.550 Elapsed time = 0.009 seconds 00:08:58.550 00:08:58.550 real 0m0.149s 00:08:58.550 user 0m0.017s 00:08:58.550 sys 0m0.025s 00:08:58.550 16:01:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:58.550 16:01:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.550 ************************************ 00:08:58.550 END TEST env_mem_callbacks 00:08:58.550 ************************************ 00:08:58.550 ************************************ 00:08:58.550 END TEST env 00:08:58.550 ************************************ 00:08:58.550 00:08:58.550 real 0m2.553s 00:08:58.550 user 0m1.150s 00:08:58.550 sys 0m0.953s 00:08:58.550 16:01:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:58.550 16:01:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.550 16:01:28 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:58.550 16:01:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:58.550 16:01:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.550 16:01:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.807 ************************************ 00:08:58.807 START TEST rpc 00:08:58.807 ************************************ 00:08:58.808 16:01:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:58.808 * Looking for test storage... 00:08:58.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:58.808 16:01:28 -- rpc/rpc.sh@65 -- # spdk_pid=70914 00:08:58.808 16:01:28 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:58.808 16:01:28 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:58.808 16:01:28 -- rpc/rpc.sh@67 -- # waitforlisten 70914 00:08:58.808 16:01:28 -- common/autotest_common.sh@817 -- # '[' -z 70914 ']' 00:08:58.808 16:01:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.808 16:01:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:58.808 16:01:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
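The rpc.sh run that begins above starts spdk_tgt with '-e bdev' and blocks until the target is listening on its UNIX-domain RPC socket before any RPCs are issued. Below is a minimal sketch of that wait-then-call pattern, assuming the repo layout and the default /var/tmp/spdk.sock path shown in the log; the polling loop is a crude stand-in for the script's waitforlisten helper, and rpc_get_methods is used only as an illustrative first call, not one the test issues here.

  # sketch only -- not part of the captured log
  sock=/var/tmp/spdk.sock
  ./build/bin/spdk_tgt -e bdev &
  tgt_pid=$!
  until [ -S "$sock" ]; do sleep 0.1; done             # stand-in for waitforlisten()
  ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null && echo "target is up"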
00:08:58.808 16:01:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:58.808 16:01:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.808 [2024-04-15 16:01:28.710477] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:08:58.808 [2024-04-15 16:01:28.710799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70914 ] 00:08:59.066 [2024-04-15 16:01:28.856210] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.066 [2024-04-15 16:01:28.931856] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:59.066 [2024-04-15 16:01:28.932165] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70914' to capture a snapshot of events at runtime. 00:08:59.066 [2024-04-15 16:01:28.932357] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.066 [2024-04-15 16:01:28.932505] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.066 [2024-04-15 16:01:28.932550] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70914 for offline analysis/debug. 00:08:59.066 [2024-04-15 16:01:28.932828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.998 16:01:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:59.998 16:01:29 -- common/autotest_common.sh@850 -- # return 0 00:08:59.998 16:01:29 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:59.998 16:01:29 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:59.998 16:01:29 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:59.998 16:01:29 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:59.998 16:01:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:59.998 16:01:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.998 16:01:29 -- common/autotest_common.sh@10 -- # set +x 00:08:59.998 ************************************ 00:08:59.998 START TEST rpc_integrity 00:08:59.998 ************************************ 00:08:59.998 16:01:29 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:08:59.998 16:01:29 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:59.998 16:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:59.998 16:01:29 -- common/autotest_common.sh@10 -- # set +x 00:08:59.998 16:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:59.998 16:01:29 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:59.998 16:01:29 -- rpc/rpc.sh@13 -- # jq length 00:09:00.256 16:01:29 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:00.256 16:01:29 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:00.256 16:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.256 16:01:29 -- common/autotest_common.sh@10 -- # set +x 00:09:00.256 16:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.256 16:01:30 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:00.256 16:01:30 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 
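The rpc_integrity test that starts here follows a create/verify/delete cycle: build a malloc bdev, layer a passthru bdev on top of it, confirm both show up in bdev_get_bdevs, then tear them down and confirm the list is empty again. The same cycle is sketched below with scripts/rpc.py instead of the test's rpc_cmd wrapper; command names and arguments are the ones visible in the log, while the explicit -b Malloc0 name is added here only for readability.

  # sketch only -- not part of the captured log
  rpc=./scripts/rpc.py
  $rpc bdev_malloc_create -b Malloc0 8 512              # 8 MiB malloc bdev, 512 B blocks
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0     # passthru layered on the malloc bdev
  $rpc bdev_get_bdevs | jq length                       # expect 2
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc0
  $rpc bdev_get_bdevs | jq length                       # expect 0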
00:09:00.256 16:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.256 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.256 16:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.256 16:01:30 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:00.256 { 00:09:00.256 "name": "Malloc0", 00:09:00.256 "aliases": [ 00:09:00.256 "478f9ed3-5e64-4b64-b2d9-ad3193a53e33" 00:09:00.256 ], 00:09:00.256 "product_name": "Malloc disk", 00:09:00.256 "block_size": 512, 00:09:00.256 "num_blocks": 16384, 00:09:00.256 "uuid": "478f9ed3-5e64-4b64-b2d9-ad3193a53e33", 00:09:00.256 "assigned_rate_limits": { 00:09:00.256 "rw_ios_per_sec": 0, 00:09:00.256 "rw_mbytes_per_sec": 0, 00:09:00.256 "r_mbytes_per_sec": 0, 00:09:00.256 "w_mbytes_per_sec": 0 00:09:00.256 }, 00:09:00.256 "claimed": false, 00:09:00.256 "zoned": false, 00:09:00.256 "supported_io_types": { 00:09:00.256 "read": true, 00:09:00.256 "write": true, 00:09:00.256 "unmap": true, 00:09:00.256 "write_zeroes": true, 00:09:00.256 "flush": true, 00:09:00.256 "reset": true, 00:09:00.256 "compare": false, 00:09:00.256 "compare_and_write": false, 00:09:00.256 "abort": true, 00:09:00.256 "nvme_admin": false, 00:09:00.256 "nvme_io": false 00:09:00.256 }, 00:09:00.256 "memory_domains": [ 00:09:00.256 { 00:09:00.256 "dma_device_id": "system", 00:09:00.256 "dma_device_type": 1 00:09:00.256 }, 00:09:00.256 { 00:09:00.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.256 "dma_device_type": 2 00:09:00.256 } 00:09:00.256 ], 00:09:00.256 "driver_specific": {} 00:09:00.256 } 00:09:00.256 ]' 00:09:00.256 16:01:30 -- rpc/rpc.sh@17 -- # jq length 00:09:00.256 16:01:30 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:00.256 16:01:30 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:00.256 16:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.256 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.256 [2024-04-15 16:01:30.082042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:00.256 [2024-04-15 16:01:30.082238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.256 [2024-04-15 16:01:30.082300] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc673f0 00:09:00.256 [2024-04-15 16:01:30.082375] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.256 [2024-04-15 16:01:30.084038] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.256 [2024-04-15 16:01:30.084190] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:00.256 Passthru0 00:09:00.256 16:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.256 16:01:30 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:00.256 16:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.256 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.256 16:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.256 16:01:30 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:00.256 { 00:09:00.256 "name": "Malloc0", 00:09:00.256 "aliases": [ 00:09:00.256 "478f9ed3-5e64-4b64-b2d9-ad3193a53e33" 00:09:00.256 ], 00:09:00.256 "product_name": "Malloc disk", 00:09:00.256 "block_size": 512, 00:09:00.256 "num_blocks": 16384, 00:09:00.256 "uuid": "478f9ed3-5e64-4b64-b2d9-ad3193a53e33", 00:09:00.256 "assigned_rate_limits": { 00:09:00.256 "rw_ios_per_sec": 0, 00:09:00.256 "rw_mbytes_per_sec": 0, 00:09:00.256 "r_mbytes_per_sec": 0, 00:09:00.256 
"w_mbytes_per_sec": 0 00:09:00.256 }, 00:09:00.256 "claimed": true, 00:09:00.256 "claim_type": "exclusive_write", 00:09:00.256 "zoned": false, 00:09:00.256 "supported_io_types": { 00:09:00.256 "read": true, 00:09:00.256 "write": true, 00:09:00.256 "unmap": true, 00:09:00.256 "write_zeroes": true, 00:09:00.256 "flush": true, 00:09:00.256 "reset": true, 00:09:00.256 "compare": false, 00:09:00.256 "compare_and_write": false, 00:09:00.256 "abort": true, 00:09:00.256 "nvme_admin": false, 00:09:00.256 "nvme_io": false 00:09:00.256 }, 00:09:00.256 "memory_domains": [ 00:09:00.256 { 00:09:00.256 "dma_device_id": "system", 00:09:00.256 "dma_device_type": 1 00:09:00.256 }, 00:09:00.256 { 00:09:00.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.256 "dma_device_type": 2 00:09:00.256 } 00:09:00.256 ], 00:09:00.256 "driver_specific": {} 00:09:00.256 }, 00:09:00.256 { 00:09:00.256 "name": "Passthru0", 00:09:00.256 "aliases": [ 00:09:00.256 "6dd7e44c-51ee-5721-b3ba-56da33ba1afc" 00:09:00.256 ], 00:09:00.256 "product_name": "passthru", 00:09:00.256 "block_size": 512, 00:09:00.256 "num_blocks": 16384, 00:09:00.256 "uuid": "6dd7e44c-51ee-5721-b3ba-56da33ba1afc", 00:09:00.256 "assigned_rate_limits": { 00:09:00.256 "rw_ios_per_sec": 0, 00:09:00.256 "rw_mbytes_per_sec": 0, 00:09:00.256 "r_mbytes_per_sec": 0, 00:09:00.256 "w_mbytes_per_sec": 0 00:09:00.256 }, 00:09:00.256 "claimed": false, 00:09:00.256 "zoned": false, 00:09:00.256 "supported_io_types": { 00:09:00.256 "read": true, 00:09:00.256 "write": true, 00:09:00.256 "unmap": true, 00:09:00.256 "write_zeroes": true, 00:09:00.256 "flush": true, 00:09:00.256 "reset": true, 00:09:00.256 "compare": false, 00:09:00.256 "compare_and_write": false, 00:09:00.256 "abort": true, 00:09:00.256 "nvme_admin": false, 00:09:00.256 "nvme_io": false 00:09:00.256 }, 00:09:00.256 "memory_domains": [ 00:09:00.256 { 00:09:00.256 "dma_device_id": "system", 00:09:00.256 "dma_device_type": 1 00:09:00.256 }, 00:09:00.256 { 00:09:00.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.256 "dma_device_type": 2 00:09:00.256 } 00:09:00.256 ], 00:09:00.256 "driver_specific": { 00:09:00.256 "passthru": { 00:09:00.256 "name": "Passthru0", 00:09:00.256 "base_bdev_name": "Malloc0" 00:09:00.256 } 00:09:00.256 } 00:09:00.256 } 00:09:00.256 ]' 00:09:00.256 16:01:30 -- rpc/rpc.sh@21 -- # jq length 00:09:00.256 16:01:30 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:00.256 16:01:30 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:00.256 16:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.256 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.256 16:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.257 16:01:30 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:00.257 16:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.257 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.257 16:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.257 16:01:30 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:00.257 16:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.257 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.257 16:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.257 16:01:30 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:00.257 16:01:30 -- rpc/rpc.sh@26 -- # jq length 00:09:00.514 16:01:30 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:00.514 00:09:00.514 real 0m0.299s 00:09:00.514 user 0m0.175s 00:09:00.514 sys 0m0.051s 00:09:00.514 16:01:30 
-- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:00.514 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.514 ************************************ 00:09:00.514 END TEST rpc_integrity 00:09:00.514 ************************************ 00:09:00.514 16:01:30 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:00.514 16:01:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.514 16:01:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.514 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.514 ************************************ 00:09:00.514 START TEST rpc_plugins 00:09:00.514 ************************************ 00:09:00.514 16:01:30 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:09:00.514 16:01:30 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:00.515 16:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.515 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.515 16:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.515 16:01:30 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:00.515 16:01:30 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:00.515 16:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.515 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.515 16:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.515 16:01:30 -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:00.515 { 00:09:00.515 "name": "Malloc1", 00:09:00.515 "aliases": [ 00:09:00.515 "d1025547-6cd2-4781-a867-bec2a432d1ba" 00:09:00.515 ], 00:09:00.515 "product_name": "Malloc disk", 00:09:00.515 "block_size": 4096, 00:09:00.515 "num_blocks": 256, 00:09:00.515 "uuid": "d1025547-6cd2-4781-a867-bec2a432d1ba", 00:09:00.515 "assigned_rate_limits": { 00:09:00.515 "rw_ios_per_sec": 0, 00:09:00.515 "rw_mbytes_per_sec": 0, 00:09:00.515 "r_mbytes_per_sec": 0, 00:09:00.515 "w_mbytes_per_sec": 0 00:09:00.515 }, 00:09:00.515 "claimed": false, 00:09:00.515 "zoned": false, 00:09:00.515 "supported_io_types": { 00:09:00.515 "read": true, 00:09:00.515 "write": true, 00:09:00.515 "unmap": true, 00:09:00.515 "write_zeroes": true, 00:09:00.515 "flush": true, 00:09:00.515 "reset": true, 00:09:00.515 "compare": false, 00:09:00.515 "compare_and_write": false, 00:09:00.515 "abort": true, 00:09:00.515 "nvme_admin": false, 00:09:00.515 "nvme_io": false 00:09:00.515 }, 00:09:00.515 "memory_domains": [ 00:09:00.515 { 00:09:00.515 "dma_device_id": "system", 00:09:00.515 "dma_device_type": 1 00:09:00.515 }, 00:09:00.515 { 00:09:00.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.515 "dma_device_type": 2 00:09:00.515 } 00:09:00.515 ], 00:09:00.515 "driver_specific": {} 00:09:00.515 } 00:09:00.515 ]' 00:09:00.515 16:01:30 -- rpc/rpc.sh@32 -- # jq length 00:09:00.515 16:01:30 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:00.515 16:01:30 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:00.515 16:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.515 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.515 16:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.515 16:01:30 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:00.515 16:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.515 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.515 16:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.515 16:01:30 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:00.515 16:01:30 -- rpc/rpc.sh@36 -- # 
jq length 00:09:00.773 ************************************ 00:09:00.773 END TEST rpc_plugins 00:09:00.773 ************************************ 00:09:00.773 16:01:30 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:00.773 00:09:00.773 real 0m0.155s 00:09:00.773 user 0m0.103s 00:09:00.773 sys 0m0.016s 00:09:00.773 16:01:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:00.773 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.773 16:01:30 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:00.773 16:01:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.773 16:01:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.773 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.773 ************************************ 00:09:00.773 START TEST rpc_trace_cmd_test 00:09:00.773 ************************************ 00:09:00.773 16:01:30 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:09:00.773 16:01:30 -- rpc/rpc.sh@40 -- # local info 00:09:00.773 16:01:30 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:00.773 16:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:00.773 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.773 16:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:00.773 16:01:30 -- rpc/rpc.sh@42 -- # info='{ 00:09:00.773 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70914", 00:09:00.773 "tpoint_group_mask": "0x8", 00:09:00.773 "iscsi_conn": { 00:09:00.773 "mask": "0x2", 00:09:00.773 "tpoint_mask": "0x0" 00:09:00.773 }, 00:09:00.773 "scsi": { 00:09:00.773 "mask": "0x4", 00:09:00.773 "tpoint_mask": "0x0" 00:09:00.773 }, 00:09:00.773 "bdev": { 00:09:00.773 "mask": "0x8", 00:09:00.773 "tpoint_mask": "0xffffffffffffffff" 00:09:00.773 }, 00:09:00.773 "nvmf_rdma": { 00:09:00.773 "mask": "0x10", 00:09:00.773 "tpoint_mask": "0x0" 00:09:00.773 }, 00:09:00.773 "nvmf_tcp": { 00:09:00.773 "mask": "0x20", 00:09:00.773 "tpoint_mask": "0x0" 00:09:00.773 }, 00:09:00.773 "ftl": { 00:09:00.773 "mask": "0x40", 00:09:00.773 "tpoint_mask": "0x0" 00:09:00.773 }, 00:09:00.773 "blobfs": { 00:09:00.773 "mask": "0x80", 00:09:00.773 "tpoint_mask": "0x0" 00:09:00.773 }, 00:09:00.773 "dsa": { 00:09:00.773 "mask": "0x200", 00:09:00.773 "tpoint_mask": "0x0" 00:09:00.773 }, 00:09:00.773 "thread": { 00:09:00.773 "mask": "0x400", 00:09:00.773 "tpoint_mask": "0x0" 00:09:00.773 }, 00:09:00.773 "nvme_pcie": { 00:09:00.773 "mask": "0x800", 00:09:00.773 "tpoint_mask": "0x0" 00:09:00.773 }, 00:09:00.773 "iaa": { 00:09:00.773 "mask": "0x1000", 00:09:00.773 "tpoint_mask": "0x0" 00:09:00.773 }, 00:09:00.773 "nvme_tcp": { 00:09:00.773 "mask": "0x2000", 00:09:00.773 "tpoint_mask": "0x0" 00:09:00.773 }, 00:09:00.773 "bdev_nvme": { 00:09:00.773 "mask": "0x4000", 00:09:00.773 "tpoint_mask": "0x0" 00:09:00.773 }, 00:09:00.773 "sock": { 00:09:00.773 "mask": "0x8000", 00:09:00.773 "tpoint_mask": "0x0" 00:09:00.773 } 00:09:00.773 }' 00:09:00.773 16:01:30 -- rpc/rpc.sh@43 -- # jq length 00:09:00.773 16:01:30 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:00.773 16:01:30 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:00.773 16:01:30 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:00.773 16:01:30 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:01.031 16:01:30 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:01.031 16:01:30 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:01.031 16:01:30 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:01.031 16:01:30 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 
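rpc_trace_cmd_test above inspects trace_get_info: because spdk_tgt was launched with '-e bdev', only the bdev tracepoint group (mask 0x8) should be enabled, and the shared-memory trace file is named after the target's pid. A sketch of those checks follows, using the field names that appear in the JSON above.

  # sketch only -- not part of the captured log
  info=$(./scripts/rpc.py trace_get_info)
  echo "$info" | jq -r .tpoint_group_mask    # "0x8" -> only the bdev group is enabled
  echo "$info" | jq -r .bdev.tpoint_mask     # non-zero tpoint mask within that group
  echo "$info" | jq -r .tpoint_shm_path      # /dev/shm/spdk_tgt_trace.pid70914 in this run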
00:09:01.031 16:01:30 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:01.031 00:09:01.031 real 0m0.237s 00:09:01.031 user 0m0.193s 00:09:01.031 sys 0m0.032s 00:09:01.031 16:01:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:01.031 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:01.031 ************************************ 00:09:01.031 END TEST rpc_trace_cmd_test 00:09:01.031 ************************************ 00:09:01.031 16:01:30 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:01.031 16:01:30 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:01.031 16:01:30 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:01.031 16:01:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:01.031 16:01:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.031 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:01.031 ************************************ 00:09:01.031 START TEST rpc_daemon_integrity 00:09:01.031 ************************************ 00:09:01.031 16:01:30 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:09:01.031 16:01:30 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:01.031 16:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.031 16:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:01.031 16:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.031 16:01:30 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:01.031 16:01:30 -- rpc/rpc.sh@13 -- # jq length 00:09:01.290 16:01:31 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:01.290 16:01:31 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:01.290 16:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.290 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:01.290 16:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.290 16:01:31 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:01.290 16:01:31 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:01.290 16:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.290 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:01.290 16:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.290 16:01:31 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:01.290 { 00:09:01.290 "name": "Malloc2", 00:09:01.290 "aliases": [ 00:09:01.290 "19e5ba1f-55be-45f2-8dd2-11014191da3a" 00:09:01.290 ], 00:09:01.290 "product_name": "Malloc disk", 00:09:01.290 "block_size": 512, 00:09:01.290 "num_blocks": 16384, 00:09:01.290 "uuid": "19e5ba1f-55be-45f2-8dd2-11014191da3a", 00:09:01.290 "assigned_rate_limits": { 00:09:01.290 "rw_ios_per_sec": 0, 00:09:01.290 "rw_mbytes_per_sec": 0, 00:09:01.290 "r_mbytes_per_sec": 0, 00:09:01.290 "w_mbytes_per_sec": 0 00:09:01.290 }, 00:09:01.290 "claimed": false, 00:09:01.290 "zoned": false, 00:09:01.290 "supported_io_types": { 00:09:01.290 "read": true, 00:09:01.290 "write": true, 00:09:01.290 "unmap": true, 00:09:01.290 "write_zeroes": true, 00:09:01.290 "flush": true, 00:09:01.290 "reset": true, 00:09:01.290 "compare": false, 00:09:01.290 "compare_and_write": false, 00:09:01.290 "abort": true, 00:09:01.290 "nvme_admin": false, 00:09:01.290 "nvme_io": false 00:09:01.290 }, 00:09:01.290 "memory_domains": [ 00:09:01.290 { 00:09:01.290 "dma_device_id": "system", 00:09:01.290 "dma_device_type": 1 00:09:01.290 }, 00:09:01.290 { 00:09:01.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.290 "dma_device_type": 2 00:09:01.290 } 00:09:01.290 ], 00:09:01.290 "driver_specific": {} 00:09:01.290 } 00:09:01.290 ]' 00:09:01.290 16:01:31 -- 
rpc/rpc.sh@17 -- # jq length 00:09:01.290 16:01:31 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:01.290 16:01:31 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:01.290 16:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.290 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:01.290 [2024-04-15 16:01:31.102422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:01.290 [2024-04-15 16:01:31.102916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.290 [2024-04-15 16:01:31.102994] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc5e970 00:09:01.290 [2024-04-15 16:01:31.103090] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.290 [2024-04-15 16:01:31.104521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.290 [2024-04-15 16:01:31.104676] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:01.290 Passthru0 00:09:01.290 16:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.290 16:01:31 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:01.290 16:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.290 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:01.290 16:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.290 16:01:31 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:01.290 { 00:09:01.290 "name": "Malloc2", 00:09:01.290 "aliases": [ 00:09:01.290 "19e5ba1f-55be-45f2-8dd2-11014191da3a" 00:09:01.290 ], 00:09:01.290 "product_name": "Malloc disk", 00:09:01.290 "block_size": 512, 00:09:01.290 "num_blocks": 16384, 00:09:01.290 "uuid": "19e5ba1f-55be-45f2-8dd2-11014191da3a", 00:09:01.290 "assigned_rate_limits": { 00:09:01.290 "rw_ios_per_sec": 0, 00:09:01.290 "rw_mbytes_per_sec": 0, 00:09:01.290 "r_mbytes_per_sec": 0, 00:09:01.290 "w_mbytes_per_sec": 0 00:09:01.290 }, 00:09:01.290 "claimed": true, 00:09:01.290 "claim_type": "exclusive_write", 00:09:01.290 "zoned": false, 00:09:01.290 "supported_io_types": { 00:09:01.290 "read": true, 00:09:01.290 "write": true, 00:09:01.290 "unmap": true, 00:09:01.290 "write_zeroes": true, 00:09:01.290 "flush": true, 00:09:01.290 "reset": true, 00:09:01.290 "compare": false, 00:09:01.290 "compare_and_write": false, 00:09:01.290 "abort": true, 00:09:01.290 "nvme_admin": false, 00:09:01.290 "nvme_io": false 00:09:01.290 }, 00:09:01.290 "memory_domains": [ 00:09:01.290 { 00:09:01.290 "dma_device_id": "system", 00:09:01.290 "dma_device_type": 1 00:09:01.290 }, 00:09:01.290 { 00:09:01.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.290 "dma_device_type": 2 00:09:01.290 } 00:09:01.290 ], 00:09:01.290 "driver_specific": {} 00:09:01.290 }, 00:09:01.290 { 00:09:01.290 "name": "Passthru0", 00:09:01.290 "aliases": [ 00:09:01.290 "2c5f0a50-b7d6-5f24-b380-8338946ce887" 00:09:01.290 ], 00:09:01.290 "product_name": "passthru", 00:09:01.290 "block_size": 512, 00:09:01.290 "num_blocks": 16384, 00:09:01.290 "uuid": "2c5f0a50-b7d6-5f24-b380-8338946ce887", 00:09:01.290 "assigned_rate_limits": { 00:09:01.290 "rw_ios_per_sec": 0, 00:09:01.290 "rw_mbytes_per_sec": 0, 00:09:01.290 "r_mbytes_per_sec": 0, 00:09:01.290 "w_mbytes_per_sec": 0 00:09:01.290 }, 00:09:01.290 "claimed": false, 00:09:01.290 "zoned": false, 00:09:01.290 "supported_io_types": { 00:09:01.290 "read": true, 00:09:01.290 "write": true, 00:09:01.290 "unmap": true, 00:09:01.290 "write_zeroes": true, 00:09:01.290 "flush": 
true, 00:09:01.290 "reset": true, 00:09:01.290 "compare": false, 00:09:01.290 "compare_and_write": false, 00:09:01.290 "abort": true, 00:09:01.290 "nvme_admin": false, 00:09:01.290 "nvme_io": false 00:09:01.290 }, 00:09:01.290 "memory_domains": [ 00:09:01.290 { 00:09:01.290 "dma_device_id": "system", 00:09:01.290 "dma_device_type": 1 00:09:01.290 }, 00:09:01.290 { 00:09:01.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.290 "dma_device_type": 2 00:09:01.290 } 00:09:01.290 ], 00:09:01.290 "driver_specific": { 00:09:01.290 "passthru": { 00:09:01.290 "name": "Passthru0", 00:09:01.290 "base_bdev_name": "Malloc2" 00:09:01.290 } 00:09:01.290 } 00:09:01.290 } 00:09:01.290 ]' 00:09:01.290 16:01:31 -- rpc/rpc.sh@21 -- # jq length 00:09:01.290 16:01:31 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:01.290 16:01:31 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:01.290 16:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.290 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:01.290 16:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.290 16:01:31 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:01.290 16:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.290 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:01.290 16:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.290 16:01:31 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:01.290 16:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:01.290 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:01.290 16:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:01.290 16:01:31 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:01.290 16:01:31 -- rpc/rpc.sh@26 -- # jq length 00:09:01.549 16:01:31 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:01.549 00:09:01.549 real 0m0.298s 00:09:01.549 user 0m0.183s 00:09:01.549 sys 0m0.044s 00:09:01.549 16:01:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:01.549 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:01.549 ************************************ 00:09:01.549 END TEST rpc_daemon_integrity 00:09:01.549 ************************************ 00:09:01.549 16:01:31 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:01.549 16:01:31 -- rpc/rpc.sh@84 -- # killprocess 70914 00:09:01.549 16:01:31 -- common/autotest_common.sh@936 -- # '[' -z 70914 ']' 00:09:01.549 16:01:31 -- common/autotest_common.sh@940 -- # kill -0 70914 00:09:01.549 16:01:31 -- common/autotest_common.sh@941 -- # uname 00:09:01.549 16:01:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:01.549 16:01:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70914 00:09:01.549 16:01:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:01.549 16:01:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:01.549 16:01:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70914' 00:09:01.549 killing process with pid 70914 00:09:01.549 16:01:31 -- common/autotest_common.sh@955 -- # kill 70914 00:09:01.549 16:01:31 -- common/autotest_common.sh@960 -- # wait 70914 00:09:01.807 ************************************ 00:09:01.807 END TEST rpc 00:09:01.807 ************************************ 00:09:01.807 00:09:01.807 real 0m3.122s 00:09:01.807 user 0m4.077s 00:09:01.807 sys 0m0.813s 00:09:01.807 16:01:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:01.807 16:01:31 -- common/autotest_common.sh@10 -- # set +x 
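The rpc_daemon_integrity block above exercises the passthru bdev lifecycle end to end: create a malloc base bdev, layer Passthru0 on top of it, confirm both bdevs are reported, then tear both down and confirm the list is empty. A minimal hand-run sketch of that same sequence is below; it assumes a target already listening on rpc.py's default socket (/var/tmp/spdk.sock), and the malloc size arguments are illustrative, chosen to match the 16384 x 512-byte blocks reported in the bdev_get_bdevs output above.

# Sketch of the RPC sequence driven by TEST rpc_daemon_integrity (method names as logged above).
./scripts/rpc.py bdev_malloc_create 8 512 --name Malloc2        # 8 MiB base bdev, 512-byte blocks
./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0   # claim Malloc2, expose Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length                     # the test expects 2 here
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc2
./scripts/rpc.py bdev_get_bdevs | jq length                     # and 0 after cleanup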
00:09:01.807 16:01:31 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:01.807 16:01:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:01.807 16:01:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.807 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:02.065 ************************************ 00:09:02.065 START TEST skip_rpc 00:09:02.065 ************************************ 00:09:02.065 16:01:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:02.065 * Looking for test storage... 00:09:02.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:02.065 16:01:31 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:02.065 16:01:31 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:02.065 16:01:31 -- rpc/skip_rpc.sh@60 -- # run_test skip_rpc test_skip_rpc 00:09:02.065 16:01:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:02.065 16:01:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.065 16:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:02.065 ************************************ 00:09:02.065 START TEST skip_rpc 00:09:02.065 ************************************ 00:09:02.065 16:01:31 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:09:02.065 16:01:31 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=71138 00:09:02.065 16:01:31 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:02.065 16:01:31 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:02.065 16:01:31 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:02.065 [2024-04-15 16:01:32.022752] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:09:02.065 [2024-04-15 16:01:32.023076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71138 ] 00:09:02.323 [2024-04-15 16:01:32.172795] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.323 [2024-04-15 16:01:32.228166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.323 [2024-04-15 16:01:32.228460] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:09:07.599 16:01:36 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:07.599 16:01:36 -- common/autotest_common.sh@638 -- # local es=0 00:09:07.599 16:01:36 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:07.599 16:01:36 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:09:07.599 16:01:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:07.599 16:01:36 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:09:07.599 16:01:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:07.599 16:01:36 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:09:07.599 16:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.599 16:01:36 -- common/autotest_common.sh@10 -- # set +x 00:09:07.599 16:01:36 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:09:07.599 16:01:36 -- common/autotest_common.sh@641 -- # es=1 00:09:07.600 16:01:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:07.600 16:01:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:07.600 16:01:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:07.600 16:01:36 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:07.600 16:01:36 -- rpc/skip_rpc.sh@23 -- # killprocess 71138 00:09:07.600 16:01:36 -- common/autotest_common.sh@936 -- # '[' -z 71138 ']' 00:09:07.600 16:01:36 -- common/autotest_common.sh@940 -- # kill -0 71138 00:09:07.600 16:01:36 -- common/autotest_common.sh@941 -- # uname 00:09:07.600 16:01:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:07.600 16:01:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71138 00:09:07.600 16:01:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:07.600 killing process with pid 71138 00:09:07.600 16:01:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:07.600 16:01:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71138' 00:09:07.600 16:01:36 -- common/autotest_common.sh@955 -- # kill 71138 00:09:07.600 16:01:36 -- common/autotest_common.sh@960 -- # wait 71138 00:09:07.600 ************************************ 00:09:07.600 END TEST skip_rpc 00:09:07.600 ************************************ 00:09:07.600 00:09:07.600 real 0m5.376s 00:09:07.600 user 0m5.003s 00:09:07.600 sys 0m0.255s 00:09:07.600 16:01:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:07.600 16:01:37 -- common/autotest_common.sh@10 -- # set +x 00:09:07.600 16:01:37 -- rpc/skip_rpc.sh@61 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:07.600 16:01:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:07.600 16:01:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.600 16:01:37 -- common/autotest_common.sh@10 -- # set +x 00:09:07.600 ************************************ 00:09:07.600 START TEST skip_rpc_with_json 00:09:07.600 
************************************ 00:09:07.600 16:01:37 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:09:07.600 16:01:37 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:07.600 16:01:37 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71224 00:09:07.600 16:01:37 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:07.600 16:01:37 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:07.600 16:01:37 -- rpc/skip_rpc.sh@31 -- # waitforlisten 71224 00:09:07.600 16:01:37 -- common/autotest_common.sh@817 -- # '[' -z 71224 ']' 00:09:07.600 16:01:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.600 16:01:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:07.600 16:01:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.600 16:01:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:07.600 16:01:37 -- common/autotest_common.sh@10 -- # set +x 00:09:07.600 [2024-04-15 16:01:37.518215] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:09:07.600 [2024-04-15 16:01:37.518518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71224 ] 00:09:07.858 [2024-04-15 16:01:37.670360] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.858 [2024-04-15 16:01:37.721487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.115 16:01:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:08.115 16:01:37 -- common/autotest_common.sh@850 -- # return 0 00:09:08.115 16:01:37 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:08.115 16:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.115 16:01:37 -- common/autotest_common.sh@10 -- # set +x 00:09:08.115 [2024-04-15 16:01:37.928876] nvmf_rpc.c:2500:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:08.115 request: 00:09:08.115 { 00:09:08.115 "trtype": "tcp", 00:09:08.115 "method": "nvmf_get_transports", 00:09:08.115 "req_id": 1 00:09:08.115 } 00:09:08.115 Got JSON-RPC error response 00:09:08.115 response: 00:09:08.115 { 00:09:08.115 "code": -19, 00:09:08.115 "message": "No such device" 00:09:08.115 } 00:09:08.115 16:01:37 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:09:08.115 16:01:37 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:08.115 16:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.115 16:01:37 -- common/autotest_common.sh@10 -- # set +x 00:09:08.115 [2024-04-15 16:01:37.941001] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.115 16:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.115 16:01:37 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:08.115 16:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.115 16:01:37 -- common/autotest_common.sh@10 -- # set +x 00:09:08.372 16:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.372 16:01:38 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:08.372 { 00:09:08.372 "subsystems": [ 00:09:08.372 { 00:09:08.372 
"subsystem": "keyring", 00:09:08.372 "config": [] 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "subsystem": "iobuf", 00:09:08.372 "config": [ 00:09:08.372 { 00:09:08.372 "method": "iobuf_set_options", 00:09:08.372 "params": { 00:09:08.372 "small_pool_count": 8192, 00:09:08.372 "large_pool_count": 1024, 00:09:08.372 "small_bufsize": 8192, 00:09:08.372 "large_bufsize": 135168 00:09:08.372 } 00:09:08.372 } 00:09:08.372 ] 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "subsystem": "sock", 00:09:08.372 "config": [ 00:09:08.372 { 00:09:08.372 "method": "sock_impl_set_options", 00:09:08.372 "params": { 00:09:08.372 "impl_name": "uring", 00:09:08.372 "recv_buf_size": 2097152, 00:09:08.372 "send_buf_size": 2097152, 00:09:08.372 "enable_recv_pipe": true, 00:09:08.372 "enable_quickack": false, 00:09:08.372 "enable_placement_id": 0, 00:09:08.372 "enable_zerocopy_send_server": false, 00:09:08.372 "enable_zerocopy_send_client": false, 00:09:08.372 "zerocopy_threshold": 0, 00:09:08.372 "tls_version": 0, 00:09:08.372 "enable_ktls": false 00:09:08.372 } 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "method": "sock_impl_set_options", 00:09:08.372 "params": { 00:09:08.372 "impl_name": "posix", 00:09:08.372 "recv_buf_size": 2097152, 00:09:08.372 "send_buf_size": 2097152, 00:09:08.372 "enable_recv_pipe": true, 00:09:08.372 "enable_quickack": false, 00:09:08.372 "enable_placement_id": 0, 00:09:08.372 "enable_zerocopy_send_server": true, 00:09:08.372 "enable_zerocopy_send_client": false, 00:09:08.372 "zerocopy_threshold": 0, 00:09:08.372 "tls_version": 0, 00:09:08.372 "enable_ktls": false 00:09:08.372 } 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "method": "sock_impl_set_options", 00:09:08.372 "params": { 00:09:08.372 "impl_name": "ssl", 00:09:08.372 "recv_buf_size": 4096, 00:09:08.372 "send_buf_size": 4096, 00:09:08.372 "enable_recv_pipe": true, 00:09:08.372 "enable_quickack": false, 00:09:08.372 "enable_placement_id": 0, 00:09:08.372 "enable_zerocopy_send_server": true, 00:09:08.372 "enable_zerocopy_send_client": false, 00:09:08.372 "zerocopy_threshold": 0, 00:09:08.372 "tls_version": 0, 00:09:08.372 "enable_ktls": false 00:09:08.372 } 00:09:08.372 } 00:09:08.372 ] 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "subsystem": "vmd", 00:09:08.372 "config": [] 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "subsystem": "accel", 00:09:08.372 "config": [ 00:09:08.372 { 00:09:08.372 "method": "accel_set_options", 00:09:08.372 "params": { 00:09:08.372 "small_cache_size": 128, 00:09:08.372 "large_cache_size": 16, 00:09:08.372 "task_count": 2048, 00:09:08.372 "sequence_count": 2048, 00:09:08.372 "buf_count": 2048 00:09:08.372 } 00:09:08.372 } 00:09:08.372 ] 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "subsystem": "bdev", 00:09:08.372 "config": [ 00:09:08.372 { 00:09:08.372 "method": "bdev_set_options", 00:09:08.372 "params": { 00:09:08.372 "bdev_io_pool_size": 65535, 00:09:08.372 "bdev_io_cache_size": 256, 00:09:08.372 "bdev_auto_examine": true, 00:09:08.372 "iobuf_small_cache_size": 128, 00:09:08.372 "iobuf_large_cache_size": 16 00:09:08.372 } 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "method": "bdev_raid_set_options", 00:09:08.372 "params": { 00:09:08.372 "process_window_size_kb": 1024 00:09:08.372 } 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "method": "bdev_iscsi_set_options", 00:09:08.372 "params": { 00:09:08.372 "timeout_sec": 30 00:09:08.372 } 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "method": "bdev_nvme_set_options", 00:09:08.372 "params": { 00:09:08.372 "action_on_timeout": "none", 00:09:08.372 "timeout_us": 0, 
00:09:08.372 "timeout_admin_us": 0, 00:09:08.372 "keep_alive_timeout_ms": 10000, 00:09:08.372 "arbitration_burst": 0, 00:09:08.372 "low_priority_weight": 0, 00:09:08.372 "medium_priority_weight": 0, 00:09:08.372 "high_priority_weight": 0, 00:09:08.372 "nvme_adminq_poll_period_us": 10000, 00:09:08.372 "nvme_ioq_poll_period_us": 0, 00:09:08.372 "io_queue_requests": 0, 00:09:08.372 "delay_cmd_submit": true, 00:09:08.372 "transport_retry_count": 4, 00:09:08.372 "bdev_retry_count": 3, 00:09:08.372 "transport_ack_timeout": 0, 00:09:08.372 "ctrlr_loss_timeout_sec": 0, 00:09:08.372 "reconnect_delay_sec": 0, 00:09:08.372 "fast_io_fail_timeout_sec": 0, 00:09:08.372 "disable_auto_failback": false, 00:09:08.372 "generate_uuids": false, 00:09:08.372 "transport_tos": 0, 00:09:08.372 "nvme_error_stat": false, 00:09:08.372 "rdma_srq_size": 0, 00:09:08.372 "io_path_stat": false, 00:09:08.372 "allow_accel_sequence": false, 00:09:08.372 "rdma_max_cq_size": 0, 00:09:08.372 "rdma_cm_event_timeout_ms": 0, 00:09:08.372 "dhchap_digests": [ 00:09:08.372 "sha256", 00:09:08.372 "sha384", 00:09:08.372 "sha512" 00:09:08.372 ], 00:09:08.372 "dhchap_dhgroups": [ 00:09:08.372 "null", 00:09:08.372 "ffdhe2048", 00:09:08.372 "ffdhe3072", 00:09:08.372 "ffdhe4096", 00:09:08.372 "ffdhe6144", 00:09:08.372 "ffdhe8192" 00:09:08.372 ] 00:09:08.372 } 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "method": "bdev_nvme_set_hotplug", 00:09:08.372 "params": { 00:09:08.372 "period_us": 100000, 00:09:08.372 "enable": false 00:09:08.372 } 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "method": "bdev_wait_for_examine" 00:09:08.372 } 00:09:08.372 ] 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "subsystem": "scsi", 00:09:08.372 "config": null 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "subsystem": "scheduler", 00:09:08.372 "config": [ 00:09:08.372 { 00:09:08.372 "method": "framework_set_scheduler", 00:09:08.372 "params": { 00:09:08.372 "name": "static" 00:09:08.372 } 00:09:08.372 } 00:09:08.372 ] 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "subsystem": "vhost_scsi", 00:09:08.372 "config": [] 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "subsystem": "vhost_blk", 00:09:08.372 "config": [] 00:09:08.372 }, 00:09:08.372 { 00:09:08.372 "subsystem": "ublk", 00:09:08.373 "config": [] 00:09:08.373 }, 00:09:08.373 { 00:09:08.373 "subsystem": "nbd", 00:09:08.373 "config": [] 00:09:08.373 }, 00:09:08.373 { 00:09:08.373 "subsystem": "nvmf", 00:09:08.373 "config": [ 00:09:08.373 { 00:09:08.373 "method": "nvmf_set_config", 00:09:08.373 "params": { 00:09:08.373 "discovery_filter": "match_any", 00:09:08.373 "admin_cmd_passthru": { 00:09:08.373 "identify_ctrlr": false 00:09:08.373 } 00:09:08.373 } 00:09:08.373 }, 00:09:08.373 { 00:09:08.373 "method": "nvmf_set_max_subsystems", 00:09:08.373 "params": { 00:09:08.373 "max_subsystems": 1024 00:09:08.373 } 00:09:08.373 }, 00:09:08.373 { 00:09:08.373 "method": "nvmf_set_crdt", 00:09:08.373 "params": { 00:09:08.373 "crdt1": 0, 00:09:08.373 "crdt2": 0, 00:09:08.373 "crdt3": 0 00:09:08.373 } 00:09:08.373 }, 00:09:08.373 { 00:09:08.373 "method": "nvmf_create_transport", 00:09:08.373 "params": { 00:09:08.373 "trtype": "TCP", 00:09:08.373 "max_queue_depth": 128, 00:09:08.373 "max_io_qpairs_per_ctrlr": 127, 00:09:08.373 "in_capsule_data_size": 4096, 00:09:08.373 "max_io_size": 131072, 00:09:08.373 "io_unit_size": 131072, 00:09:08.373 "max_aq_depth": 128, 00:09:08.373 "num_shared_buffers": 511, 00:09:08.373 "buf_cache_size": 4294967295, 00:09:08.373 "dif_insert_or_strip": false, 00:09:08.373 "zcopy": false, 00:09:08.373 
"c2h_success": true, 00:09:08.373 "sock_priority": 0, 00:09:08.373 "abort_timeout_sec": 1, 00:09:08.373 "ack_timeout": 0 00:09:08.373 } 00:09:08.373 } 00:09:08.373 ] 00:09:08.373 }, 00:09:08.373 { 00:09:08.373 "subsystem": "iscsi", 00:09:08.373 "config": [ 00:09:08.373 { 00:09:08.373 "method": "iscsi_set_options", 00:09:08.373 "params": { 00:09:08.373 "node_base": "iqn.2016-06.io.spdk", 00:09:08.373 "max_sessions": 128, 00:09:08.373 "max_connections_per_session": 2, 00:09:08.373 "max_queue_depth": 64, 00:09:08.373 "default_time2wait": 2, 00:09:08.373 "default_time2retain": 20, 00:09:08.373 "first_burst_length": 8192, 00:09:08.373 "immediate_data": true, 00:09:08.373 "allow_duplicated_isid": false, 00:09:08.373 "error_recovery_level": 0, 00:09:08.373 "nop_timeout": 60, 00:09:08.373 "nop_in_interval": 30, 00:09:08.373 "disable_chap": false, 00:09:08.373 "require_chap": false, 00:09:08.373 "mutual_chap": false, 00:09:08.373 "chap_group": 0, 00:09:08.373 "max_large_datain_per_connection": 64, 00:09:08.373 "max_r2t_per_connection": 4, 00:09:08.373 "pdu_pool_size": 36864, 00:09:08.373 "immediate_data_pool_size": 16384, 00:09:08.373 "data_out_pool_size": 2048 00:09:08.373 } 00:09:08.373 } 00:09:08.373 ] 00:09:08.373 } 00:09:08.373 ] 00:09:08.373 } 00:09:08.373 16:01:38 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:08.373 16:01:38 -- rpc/skip_rpc.sh@40 -- # killprocess 71224 00:09:08.373 16:01:38 -- common/autotest_common.sh@936 -- # '[' -z 71224 ']' 00:09:08.373 16:01:38 -- common/autotest_common.sh@940 -- # kill -0 71224 00:09:08.373 16:01:38 -- common/autotest_common.sh@941 -- # uname 00:09:08.373 16:01:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:08.373 16:01:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71224 00:09:08.373 16:01:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:08.373 16:01:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:08.373 16:01:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71224' 00:09:08.373 killing process with pid 71224 00:09:08.373 16:01:38 -- common/autotest_common.sh@955 -- # kill 71224 00:09:08.373 16:01:38 -- common/autotest_common.sh@960 -- # wait 71224 00:09:08.630 16:01:38 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71244 00:09:08.630 16:01:38 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:08.630 16:01:38 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:13.913 16:01:43 -- rpc/skip_rpc.sh@50 -- # killprocess 71244 00:09:13.913 16:01:43 -- common/autotest_common.sh@936 -- # '[' -z 71244 ']' 00:09:13.913 16:01:43 -- common/autotest_common.sh@940 -- # kill -0 71244 00:09:13.913 16:01:43 -- common/autotest_common.sh@941 -- # uname 00:09:13.913 16:01:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:13.913 16:01:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71244 00:09:13.913 16:01:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:13.913 16:01:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:13.913 16:01:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71244' 00:09:13.913 killing process with pid 71244 00:09:13.913 16:01:43 -- common/autotest_common.sh@955 -- # kill 71244 00:09:13.913 16:01:43 -- common/autotest_common.sh@960 -- # wait 71244 00:09:13.913 16:01:43 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:13.913 16:01:43 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:13.913 00:09:13.913 real 0m6.400s 00:09:13.913 user 0m5.988s 00:09:13.913 sys 0m0.549s 00:09:13.913 16:01:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:13.913 16:01:43 -- common/autotest_common.sh@10 -- # set +x 00:09:13.913 ************************************ 00:09:13.913 END TEST skip_rpc_with_json 00:09:13.913 ************************************ 00:09:14.171 16:01:43 -- rpc/skip_rpc.sh@62 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:14.171 16:01:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:14.171 16:01:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.171 16:01:43 -- common/autotest_common.sh@10 -- # set +x 00:09:14.171 ************************************ 00:09:14.171 START TEST skip_rpc_with_delay 00:09:14.171 ************************************ 00:09:14.171 16:01:43 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:09:14.171 16:01:43 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:14.171 16:01:43 -- common/autotest_common.sh@638 -- # local es=0 00:09:14.171 16:01:43 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:14.171 16:01:43 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:14.171 16:01:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:14.171 16:01:43 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:14.171 16:01:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:14.171 16:01:43 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:14.171 16:01:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:14.171 16:01:43 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:14.171 16:01:43 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:14.171 16:01:43 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:14.171 [2024-04-15 16:01:44.047228] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:09:14.171 [2024-04-15 16:01:44.048478] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:14.171 16:01:44 -- common/autotest_common.sh@641 -- # es=1 00:09:14.171 16:01:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:14.171 16:01:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:14.171 16:01:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:14.171 00:09:14.171 real 0m0.098s 00:09:14.171 user 0m0.058s 00:09:14.171 sys 0m0.034s 00:09:14.171 16:01:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:14.171 16:01:44 -- common/autotest_common.sh@10 -- # set +x 00:09:14.171 ************************************ 00:09:14.171 END TEST skip_rpc_with_delay 00:09:14.171 ************************************ 00:09:14.171 16:01:44 -- rpc/skip_rpc.sh@64 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:14.171 00:09:14.171 real 0m12.337s 00:09:14.171 user 0m11.198s 00:09:14.171 sys 0m1.104s 00:09:14.171 16:01:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:14.171 ************************************ 00:09:14.171 END TEST skip_rpc 00:09:14.171 ************************************ 00:09:14.171 16:01:44 -- common/autotest_common.sh@10 -- # set +x 00:09:14.429 16:01:44 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:14.430 16:01:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:14.430 16:01:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.430 16:01:44 -- common/autotest_common.sh@10 -- # set +x 00:09:14.430 ************************************ 00:09:14.430 START TEST rpc_client 00:09:14.430 ************************************ 00:09:14.430 16:01:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:14.430 * Looking for test storage... 
00:09:14.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:14.430 16:01:44 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:14.430 OK 00:09:14.430 16:01:44 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:14.430 00:09:14.430 real 0m0.117s 00:09:14.430 user 0m0.044s 00:09:14.430 sys 0m0.079s 00:09:14.430 16:01:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:14.430 16:01:44 -- common/autotest_common.sh@10 -- # set +x 00:09:14.430 ************************************ 00:09:14.430 END TEST rpc_client 00:09:14.430 ************************************ 00:09:14.430 16:01:44 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:14.430 16:01:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:14.430 16:01:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.430 16:01:44 -- common/autotest_common.sh@10 -- # set +x 00:09:14.688 ************************************ 00:09:14.688 START TEST json_config 00:09:14.688 ************************************ 00:09:14.688 16:01:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:14.688 16:01:44 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:14.688 16:01:44 -- nvmf/common.sh@7 -- # uname -s 00:09:14.688 16:01:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.688 16:01:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.688 16:01:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.688 16:01:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.688 16:01:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.688 16:01:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.688 16:01:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.688 16:01:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.688 16:01:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.688 16:01:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.688 16:01:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:09:14.688 16:01:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:09:14.688 16:01:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.688 16:01:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.688 16:01:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:14.688 16:01:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.688 16:01:44 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.688 16:01:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.688 16:01:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.688 16:01:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.688 16:01:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.688 16:01:44 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.688 16:01:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.688 16:01:44 -- paths/export.sh@5 -- # export PATH 00:09:14.688 16:01:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.688 16:01:44 -- nvmf/common.sh@47 -- # : 0 00:09:14.688 16:01:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:14.688 16:01:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:14.688 16:01:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.688 16:01:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.688 16:01:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.688 16:01:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:14.688 16:01:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:14.688 16:01:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:14.688 16:01:44 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:14.688 16:01:44 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:14.688 16:01:44 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:14.688 16:01:44 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:14.688 16:01:44 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:14.688 16:01:44 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:14.688 16:01:44 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:14.688 16:01:44 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:14.688 16:01:44 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:14.688 16:01:44 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:14.688 16:01:44 -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:14.688 16:01:44 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:14.688 16:01:44 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:14.688 16:01:44 -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:14.688 
16:01:44 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:14.688 16:01:44 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:09:14.688 INFO: JSON configuration test init 00:09:14.688 16:01:44 -- json_config/json_config.sh@357 -- # json_config_test_init 00:09:14.688 16:01:44 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:09:14.688 16:01:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:14.688 16:01:44 -- common/autotest_common.sh@10 -- # set +x 00:09:14.688 16:01:44 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:09:14.688 16:01:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:14.688 16:01:44 -- common/autotest_common.sh@10 -- # set +x 00:09:14.688 16:01:44 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:09:14.688 16:01:44 -- json_config/common.sh@9 -- # local app=target 00:09:14.688 16:01:44 -- json_config/common.sh@10 -- # shift 00:09:14.688 16:01:44 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:14.688 16:01:44 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:14.688 16:01:44 -- json_config/common.sh@15 -- # local app_extra_params= 00:09:14.688 16:01:44 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:14.688 16:01:44 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:14.688 16:01:44 -- json_config/common.sh@22 -- # app_pid["$app"]=71454 00:09:14.688 16:01:44 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:14.688 Waiting for target to run... 00:09:14.688 16:01:44 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:14.688 16:01:44 -- json_config/common.sh@25 -- # waitforlisten 71454 /var/tmp/spdk_tgt.sock 00:09:14.688 16:01:44 -- common/autotest_common.sh@817 -- # '[' -z 71454 ']' 00:09:14.688 16:01:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:14.688 16:01:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:14.688 16:01:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:14.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:14.688 16:01:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:14.688 16:01:44 -- common/autotest_common.sh@10 -- # set +x 00:09:14.946 [2024-04-15 16:01:44.668168] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:09:14.946 [2024-04-15 16:01:44.668451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71454 ] 00:09:15.204 [2024-04-15 16:01:45.053316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.204 [2024-04-15 16:01:45.085244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.768 00:09:15.768 16:01:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:15.768 16:01:45 -- common/autotest_common.sh@850 -- # return 0 00:09:15.768 16:01:45 -- json_config/common.sh@26 -- # echo '' 00:09:15.768 16:01:45 -- json_config/json_config.sh@269 -- # create_accel_config 00:09:15.768 16:01:45 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:09:15.768 16:01:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:15.768 16:01:45 -- common/autotest_common.sh@10 -- # set +x 00:09:15.768 16:01:45 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:09:15.768 16:01:45 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:09:15.768 16:01:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:15.768 16:01:45 -- common/autotest_common.sh@10 -- # set +x 00:09:15.768 16:01:45 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:15.768 16:01:45 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:09:15.768 16:01:45 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:16.333 16:01:46 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:09:16.333 16:01:46 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:16.333 16:01:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:16.333 16:01:46 -- common/autotest_common.sh@10 -- # set +x 00:09:16.333 16:01:46 -- json_config/json_config.sh@45 -- # local ret=0 00:09:16.333 16:01:46 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:16.333 16:01:46 -- json_config/json_config.sh@46 -- # local enabled_types 00:09:16.333 16:01:46 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:16.333 16:01:46 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:16.333 16:01:46 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:16.590 16:01:46 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:16.590 16:01:46 -- json_config/json_config.sh@48 -- # local get_types 00:09:16.590 16:01:46 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:16.590 16:01:46 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:09:16.590 16:01:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:16.590 16:01:46 -- common/autotest_common.sh@10 -- # set +x 00:09:16.590 16:01:46 -- json_config/json_config.sh@55 -- # return 0 00:09:16.590 16:01:46 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:09:16.590 16:01:46 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:09:16.590 16:01:46 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:09:16.590 16:01:46 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
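The tgt_check_notification_types step above boils down to one RPC plus a comparison: ask the target which notification types it emits and check that they are exactly bdev_register and bdev_unregister. A stand-alone sketch, assuming the same /var/tmp/spdk_tgt.sock socket used throughout this run:

# Query the target's notification types (as json_config.sh does) and verify the expected pair.
types=$(./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]')
[[ $types == $'bdev_register\nbdev_unregister' ]] && echo 'notification types OK'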
00:09:16.590 16:01:46 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:09:16.590 16:01:46 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:09:16.590 16:01:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:16.590 16:01:46 -- common/autotest_common.sh@10 -- # set +x 00:09:16.590 16:01:46 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:16.590 16:01:46 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:09:16.590 16:01:46 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:09:16.590 16:01:46 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:16.590 16:01:46 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:16.847 MallocForNvmf0 00:09:16.847 16:01:46 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:16.847 16:01:46 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:17.105 MallocForNvmf1 00:09:17.105 16:01:46 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:17.105 16:01:46 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:09:17.363 [2024-04-15 16:01:47.305704] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.363 16:01:47 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:17.620 16:01:47 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:17.876 16:01:47 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:17.876 16:01:47 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:18.134 16:01:47 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:18.134 16:01:47 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:18.392 16:01:48 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:18.392 16:01:48 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:18.650 [2024-04-15 16:01:48.558391] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:18.650 16:01:48 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:09:18.650 16:01:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:18.650 16:01:48 -- common/autotest_common.sh@10 -- # set +x 00:09:18.908 16:01:48 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:09:18.908 16:01:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:18.908 16:01:48 -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.908 16:01:48 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:09:18.908 16:01:48 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:18.908 16:01:48 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:19.167 MallocBdevForConfigChangeCheck 00:09:19.167 16:01:48 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:09:19.167 16:01:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:19.167 16:01:48 -- common/autotest_common.sh@10 -- # set +x 00:09:19.167 16:01:49 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:09:19.167 16:01:49 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:19.734 INFO: shutting down applications... 00:09:19.734 16:01:49 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:09:19.734 16:01:49 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:09:19.734 16:01:49 -- json_config/json_config.sh@368 -- # json_config_clear target 00:09:19.734 16:01:49 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:09:19.734 16:01:49 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:19.992 Calling clear_iscsi_subsystem 00:09:19.992 Calling clear_nvmf_subsystem 00:09:19.992 Calling clear_nbd_subsystem 00:09:19.992 Calling clear_ublk_subsystem 00:09:19.992 Calling clear_vhost_blk_subsystem 00:09:19.992 Calling clear_vhost_scsi_subsystem 00:09:19.992 Calling clear_bdev_subsystem 00:09:19.992 16:01:49 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:19.992 16:01:49 -- json_config/json_config.sh@343 -- # count=100 00:09:19.992 16:01:49 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:09:19.992 16:01:49 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:19.992 16:01:49 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:19.992 16:01:49 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:20.249 16:01:50 -- json_config/json_config.sh@345 -- # break 00:09:20.250 16:01:50 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:09:20.250 16:01:50 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:09:20.250 16:01:50 -- json_config/common.sh@31 -- # local app=target 00:09:20.250 16:01:50 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:20.250 16:01:50 -- json_config/common.sh@35 -- # [[ -n 71454 ]] 00:09:20.250 16:01:50 -- json_config/common.sh@38 -- # kill -SIGINT 71454 00:09:20.250 16:01:50 -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:20.250 16:01:50 -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:20.250 16:01:50 -- json_config/common.sh@41 -- # kill -0 71454 00:09:20.250 16:01:50 -- json_config/common.sh@45 -- # sleep 0.5 00:09:20.816 16:01:50 -- json_config/common.sh@40 -- # (( i++ )) 00:09:20.816 16:01:50 -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:20.816 16:01:50 -- json_config/common.sh@41 -- # kill -0 71454 00:09:20.816 16:01:50 -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:09:20.816 16:01:50 -- json_config/common.sh@43 -- # break 00:09:20.816 16:01:50 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:20.816 16:01:50 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:20.816 SPDK target shutdown done 00:09:20.816 16:01:50 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:09:20.816 INFO: relaunching applications... 00:09:20.816 16:01:50 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:20.816 16:01:50 -- json_config/common.sh@9 -- # local app=target 00:09:20.816 16:01:50 -- json_config/common.sh@10 -- # shift 00:09:20.816 16:01:50 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:20.816 16:01:50 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:20.816 16:01:50 -- json_config/common.sh@15 -- # local app_extra_params= 00:09:20.816 16:01:50 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:20.816 16:01:50 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:20.816 16:01:50 -- json_config/common.sh@22 -- # app_pid["$app"]=71650 00:09:20.816 16:01:50 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:20.816 16:01:50 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:20.816 Waiting for target to run... 00:09:20.816 16:01:50 -- json_config/common.sh@25 -- # waitforlisten 71650 /var/tmp/spdk_tgt.sock 00:09:20.816 16:01:50 -- common/autotest_common.sh@817 -- # '[' -z 71650 ']' 00:09:20.816 16:01:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:20.816 16:01:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:20.816 16:01:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:20.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:20.816 16:01:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:20.816 16:01:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.816 [2024-04-15 16:01:50.747876] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:09:20.816 [2024-04-15 16:01:50.748136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71650 ] 00:09:21.383 [2024-04-15 16:01:51.100831] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.383 [2024-04-15 16:01:51.129271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.641 [2024-04-15 16:01:51.423029] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.641 [2024-04-15 16:01:51.455112] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:21.898 00:09:21.898 INFO: Checking if target configuration is the same... 
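The "Checking if target configuration is the same..." step below feeds two inputs to json_diff.sh: the spdk_tgt_config.json the target is relaunched from, and a live save_config dump passed in over /dev/fd/62. Stripped of the mktemp bookkeeping, the comparison reduces to roughly the sketch below; paths are shown relative to the spdk repo root and the /tmp file names are illustrative.

# Normalize both configs with config_filter.py -method sort, then diff; an empty diff means the
# running target still matches spdk_tgt_config.json (the 'JSON config files are the same' case).
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
  | ./test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk_sorted.json
diff -u /tmp/disk_sorted.json /tmp/live_sorted.json && echo 'INFO: JSON config files are the same'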
00:09:21.898 16:01:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:21.898 16:01:51 -- common/autotest_common.sh@850 -- # return 0 00:09:21.898 16:01:51 -- json_config/common.sh@26 -- # echo '' 00:09:21.898 16:01:51 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:09:21.898 16:01:51 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:21.898 16:01:51 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:21.898 16:01:51 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:09:21.898 16:01:51 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:21.898 + '[' 2 -ne 2 ']' 00:09:21.898 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:21.898 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:21.898 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:21.898 +++ basename /dev/fd/62 00:09:21.898 ++ mktemp /tmp/62.XXX 00:09:21.898 + tmp_file_1=/tmp/62.eIy 00:09:21.898 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:21.898 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:21.898 + tmp_file_2=/tmp/spdk_tgt_config.json.xwq 00:09:21.898 + ret=0 00:09:21.898 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:22.482 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:22.482 + diff -u /tmp/62.eIy /tmp/spdk_tgt_config.json.xwq 00:09:22.482 + echo 'INFO: JSON config files are the same' 00:09:22.482 INFO: JSON config files are the same 00:09:22.482 + rm /tmp/62.eIy /tmp/spdk_tgt_config.json.xwq 00:09:22.482 + exit 0 00:09:22.482 INFO: changing configuration and checking if this can be detected... 00:09:22.482 16:01:52 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:09:22.482 16:01:52 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:22.482 16:01:52 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:22.482 16:01:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:22.739 16:01:52 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:22.739 16:01:52 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:09:22.739 16:01:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:22.739 + '[' 2 -ne 2 ']' 00:09:22.739 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:22.739 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:09:22.739 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:22.739 +++ basename /dev/fd/62 00:09:22.739 ++ mktemp /tmp/62.XXX 00:09:22.739 + tmp_file_1=/tmp/62.vxT 00:09:22.739 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:22.739 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:22.739 + tmp_file_2=/tmp/spdk_tgt_config.json.54O 00:09:22.739 + ret=0 00:09:22.739 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:23.304 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:23.304 + diff -u /tmp/62.vxT /tmp/spdk_tgt_config.json.54O 00:09:23.304 + ret=1 00:09:23.304 + echo '=== Start of file: /tmp/62.vxT ===' 00:09:23.304 + cat /tmp/62.vxT 00:09:23.304 + echo '=== End of file: /tmp/62.vxT ===' 00:09:23.304 + echo '' 00:09:23.304 + echo '=== Start of file: /tmp/spdk_tgt_config.json.54O ===' 00:09:23.304 + cat /tmp/spdk_tgt_config.json.54O 00:09:23.304 + echo '=== End of file: /tmp/spdk_tgt_config.json.54O ===' 00:09:23.304 + echo '' 00:09:23.304 + rm /tmp/62.vxT /tmp/spdk_tgt_config.json.54O 00:09:23.304 + exit 1 00:09:23.304 INFO: configuration change detected. 00:09:23.304 16:01:53 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:09:23.304 16:01:53 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:09:23.304 16:01:53 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:09:23.304 16:01:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:23.304 16:01:53 -- common/autotest_common.sh@10 -- # set +x 00:09:23.304 16:01:53 -- json_config/json_config.sh@307 -- # local ret=0 00:09:23.304 16:01:53 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:09:23.304 16:01:53 -- json_config/json_config.sh@317 -- # [[ -n 71650 ]] 00:09:23.304 16:01:53 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:09:23.304 16:01:53 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:09:23.304 16:01:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:23.304 16:01:53 -- common/autotest_common.sh@10 -- # set +x 00:09:23.304 16:01:53 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:09:23.304 16:01:53 -- json_config/json_config.sh@193 -- # uname -s 00:09:23.304 16:01:53 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:09:23.304 16:01:53 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:09:23.304 16:01:53 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:09:23.305 16:01:53 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:09:23.305 16:01:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:23.305 16:01:53 -- common/autotest_common.sh@10 -- # set +x 00:09:23.305 16:01:53 -- json_config/json_config.sh@323 -- # killprocess 71650 00:09:23.305 16:01:53 -- common/autotest_common.sh@936 -- # '[' -z 71650 ']' 00:09:23.305 16:01:53 -- common/autotest_common.sh@940 -- # kill -0 71650 00:09:23.305 16:01:53 -- common/autotest_common.sh@941 -- # uname 00:09:23.305 16:01:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:23.305 16:01:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71650 00:09:23.305 killing process with pid 71650 00:09:23.305 16:01:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:23.305 16:01:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:23.305 16:01:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71650' 00:09:23.305 
16:01:53 -- common/autotest_common.sh@955 -- # kill 71650 00:09:23.305 16:01:53 -- common/autotest_common.sh@960 -- # wait 71650 00:09:23.563 16:01:53 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:23.563 16:01:53 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:09:23.563 16:01:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:23.563 16:01:53 -- common/autotest_common.sh@10 -- # set +x 00:09:23.563 16:01:53 -- json_config/json_config.sh@328 -- # return 0 00:09:23.563 16:01:53 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:09:23.563 INFO: Success 00:09:23.563 ************************************ 00:09:23.563 END TEST json_config 00:09:23.563 ************************************ 00:09:23.563 00:09:23.563 real 0m8.954s 00:09:23.563 user 0m13.016s 00:09:23.563 sys 0m1.880s 00:09:23.563 16:01:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:23.563 16:01:53 -- common/autotest_common.sh@10 -- # set +x 00:09:23.563 16:01:53 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:23.563 16:01:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:23.563 16:01:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.563 16:01:53 -- common/autotest_common.sh@10 -- # set +x 00:09:23.823 ************************************ 00:09:23.823 START TEST json_config_extra_key 00:09:23.823 ************************************ 00:09:23.823 16:01:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:23.823 16:01:53 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:23.823 16:01:53 -- nvmf/common.sh@7 -- # uname -s 00:09:23.823 16:01:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.823 16:01:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.823 16:01:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.823 16:01:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.823 16:01:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.823 16:01:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.823 16:01:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.823 16:01:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.823 16:01:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.823 16:01:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.823 16:01:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:09:23.823 16:01:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:09:23.823 16:01:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.823 16:01:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.823 16:01:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:23.823 16:01:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.823 16:01:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:23.823 16:01:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.823 16:01:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.823 16:01:53 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.823 16:01:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.823 16:01:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.823 16:01:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.823 16:01:53 -- paths/export.sh@5 -- # export PATH 00:09:23.823 16:01:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.823 16:01:53 -- nvmf/common.sh@47 -- # : 0 00:09:23.823 16:01:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:23.823 16:01:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:23.823 16:01:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.823 16:01:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.823 16:01:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.823 16:01:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:23.823 16:01:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:23.823 16:01:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:23.823 16:01:53 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:23.823 16:01:53 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:23.823 16:01:53 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:23.823 16:01:53 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:23.823 16:01:53 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:23.823 16:01:53 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:23.823 16:01:53 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:23.823 16:01:53 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:23.823 16:01:53 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:23.823 16:01:53 -- json_config/json_config_extra_key.sh@22 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:23.823 16:01:53 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:23.823 INFO: launching applications... 00:09:23.823 16:01:53 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:23.823 16:01:53 -- json_config/common.sh@9 -- # local app=target 00:09:23.823 16:01:53 -- json_config/common.sh@10 -- # shift 00:09:23.823 16:01:53 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:23.823 16:01:53 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:23.823 16:01:53 -- json_config/common.sh@15 -- # local app_extra_params= 00:09:23.823 16:01:53 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:23.823 16:01:53 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:23.823 16:01:53 -- json_config/common.sh@22 -- # app_pid["$app"]=71802 00:09:23.823 16:01:53 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:23.823 16:01:53 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:23.823 Waiting for target to run... 00:09:23.823 16:01:53 -- json_config/common.sh@25 -- # waitforlisten 71802 /var/tmp/spdk_tgt.sock 00:09:23.823 16:01:53 -- common/autotest_common.sh@817 -- # '[' -z 71802 ']' 00:09:23.823 16:01:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:23.823 16:01:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:23.823 16:01:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:23.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:23.823 16:01:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:23.823 16:01:53 -- common/autotest_common.sh@10 -- # set +x 00:09:23.823 [2024-04-15 16:01:53.711997] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:09:23.823 [2024-04-15 16:01:53.712282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71802 ] 00:09:24.389 [2024-04-15 16:01:54.077123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.389 [2024-04-15 16:01:54.120746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.956 00:09:24.956 INFO: shutting down applications... 00:09:24.956 16:01:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:24.956 16:01:54 -- common/autotest_common.sh@850 -- # return 0 00:09:24.956 16:01:54 -- json_config/common.sh@26 -- # echo '' 00:09:24.956 16:01:54 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
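At this point the target has come up from the supplied JSON file. What json_config_test_start_app does here boils down to launching spdk_tgt with the extra-key configuration on a private RPC socket and waiting until that socket answers; a rough sketch assuming the paths shown above (the polling loop and its interval are illustrative stand-ins for the real waitforlisten helper in the common test scripts):

    rootdir=/home/vagrant/spdk_repo/spdk
    # Start the target with the extra-key JSON configuration on its own RPC socket.
    "$rootdir"/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$rootdir"/test/json_config/extra_key.json &
    tgt_pid=$!
    # Wait until the target starts answering RPCs on that socket before running the test body.
    until "$rootdir"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done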
00:09:24.956 16:01:54 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:24.956 16:01:54 -- json_config/common.sh@31 -- # local app=target 00:09:24.956 16:01:54 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:24.956 16:01:54 -- json_config/common.sh@35 -- # [[ -n 71802 ]] 00:09:24.956 16:01:54 -- json_config/common.sh@38 -- # kill -SIGINT 71802 00:09:24.956 16:01:54 -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:24.956 16:01:54 -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:24.956 16:01:54 -- json_config/common.sh@41 -- # kill -0 71802 00:09:24.956 16:01:54 -- json_config/common.sh@45 -- # sleep 0.5 00:09:25.522 16:01:55 -- json_config/common.sh@40 -- # (( i++ )) 00:09:25.522 16:01:55 -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:25.522 16:01:55 -- json_config/common.sh@41 -- # kill -0 71802 00:09:25.522 16:01:55 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:25.522 16:01:55 -- json_config/common.sh@43 -- # break 00:09:25.522 16:01:55 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:25.522 16:01:55 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:25.522 SPDK target shutdown done 00:09:25.522 16:01:55 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:25.522 Success 00:09:25.522 ************************************ 00:09:25.522 END TEST json_config_extra_key 00:09:25.522 ************************************ 00:09:25.522 00:09:25.522 real 0m1.678s 00:09:25.522 user 0m1.575s 00:09:25.522 sys 0m0.391s 00:09:25.522 16:01:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:25.522 16:01:55 -- common/autotest_common.sh@10 -- # set +x 00:09:25.522 16:01:55 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:25.522 16:01:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:25.522 16:01:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:25.522 16:01:55 -- common/autotest_common.sh@10 -- # set +x 00:09:25.522 ************************************ 00:09:25.522 START TEST alias_rpc 00:09:25.522 ************************************ 00:09:25.522 16:01:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:25.522 * Looking for test storage... 00:09:25.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:25.522 16:01:55 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:25.522 16:01:55 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71877 00:09:25.522 16:01:55 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:25.522 16:01:55 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71877 00:09:25.522 16:01:55 -- common/autotest_common.sh@817 -- # '[' -z 71877 ']' 00:09:25.522 16:01:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.522 16:01:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:25.522 16:01:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.522 16:01:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:25.522 16:01:55 -- common/autotest_common.sh@10 -- # set +x 00:09:25.780 [2024-04-15 16:01:55.541174] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:09:25.780 [2024-04-15 16:01:55.542063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71877 ] 00:09:25.780 [2024-04-15 16:01:55.683354] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.090 [2024-04-15 16:01:55.754578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.662 16:01:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:26.662 16:01:56 -- common/autotest_common.sh@850 -- # return 0 00:09:26.662 16:01:56 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:26.920 16:01:56 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71877 00:09:26.920 16:01:56 -- common/autotest_common.sh@936 -- # '[' -z 71877 ']' 00:09:26.920 16:01:56 -- common/autotest_common.sh@940 -- # kill -0 71877 00:09:26.920 16:01:56 -- common/autotest_common.sh@941 -- # uname 00:09:26.920 16:01:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:26.920 16:01:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71877 00:09:26.920 16:01:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:26.920 16:01:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:26.920 16:01:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71877' 00:09:26.920 killing process with pid 71877 00:09:26.920 16:01:56 -- common/autotest_common.sh@955 -- # kill 71877 00:09:26.920 16:01:56 -- common/autotest_common.sh@960 -- # wait 71877 00:09:27.490 ************************************ 00:09:27.490 END TEST alias_rpc 00:09:27.490 ************************************ 00:09:27.490 00:09:27.490 real 0m1.778s 00:09:27.490 user 0m2.035s 00:09:27.490 sys 0m0.425s 00:09:27.490 16:01:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:27.490 16:01:57 -- common/autotest_common.sh@10 -- # set +x 00:09:27.490 16:01:57 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:09:27.490 16:01:57 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:27.490 16:01:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:27.490 16:01:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.490 16:01:57 -- common/autotest_common.sh@10 -- # set +x 00:09:27.490 ************************************ 00:09:27.490 START TEST spdkcli_tcp 00:09:27.490 ************************************ 00:09:27.490 16:01:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:27.490 * Looking for test storage... 
00:09:27.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:27.490 16:01:57 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:27.490 16:01:57 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:27.490 16:01:57 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:27.490 16:01:57 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:27.490 16:01:57 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:27.490 16:01:57 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:27.490 16:01:57 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:27.490 16:01:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:27.490 16:01:57 -- common/autotest_common.sh@10 -- # set +x 00:09:27.490 16:01:57 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71953 00:09:27.491 16:01:57 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:27.491 16:01:57 -- spdkcli/tcp.sh@27 -- # waitforlisten 71953 00:09:27.491 16:01:57 -- common/autotest_common.sh@817 -- # '[' -z 71953 ']' 00:09:27.491 16:01:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.491 16:01:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:27.491 16:01:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.491 16:01:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:27.491 16:01:57 -- common/autotest_common.sh@10 -- # set +x 00:09:27.748 [2024-04-15 16:01:57.473841] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:09:27.749 [2024-04-15 16:01:57.474161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71953 ] 00:09:27.749 [2024-04-15 16:01:57.612061] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.749 [2024-04-15 16:01:57.664926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.749 [2024-04-15 16:01:57.664930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.006 16:01:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:28.006 16:01:57 -- common/autotest_common.sh@850 -- # return 0 00:09:28.006 16:01:57 -- spdkcli/tcp.sh@31 -- # socat_pid=71962 00:09:28.006 16:01:57 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:28.006 16:01:57 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:28.266 [ 00:09:28.266 "bdev_malloc_delete", 00:09:28.266 "bdev_malloc_create", 00:09:28.266 "bdev_null_resize", 00:09:28.266 "bdev_null_delete", 00:09:28.266 "bdev_null_create", 00:09:28.266 "bdev_nvme_cuse_unregister", 00:09:28.266 "bdev_nvme_cuse_register", 00:09:28.266 "bdev_opal_new_user", 00:09:28.266 "bdev_opal_set_lock_state", 00:09:28.266 "bdev_opal_delete", 00:09:28.266 "bdev_opal_get_info", 00:09:28.266 "bdev_opal_create", 00:09:28.266 "bdev_nvme_opal_revert", 00:09:28.266 "bdev_nvme_opal_init", 00:09:28.266 "bdev_nvme_send_cmd", 00:09:28.266 "bdev_nvme_get_path_iostat", 00:09:28.266 "bdev_nvme_get_mdns_discovery_info", 00:09:28.266 "bdev_nvme_stop_mdns_discovery", 00:09:28.266 "bdev_nvme_start_mdns_discovery", 00:09:28.266 "bdev_nvme_set_multipath_policy", 00:09:28.266 "bdev_nvme_set_preferred_path", 00:09:28.266 "bdev_nvme_get_io_paths", 00:09:28.266 "bdev_nvme_remove_error_injection", 00:09:28.266 "bdev_nvme_add_error_injection", 00:09:28.266 "bdev_nvme_get_discovery_info", 00:09:28.266 "bdev_nvme_stop_discovery", 00:09:28.266 "bdev_nvme_start_discovery", 00:09:28.266 "bdev_nvme_get_controller_health_info", 00:09:28.266 "bdev_nvme_disable_controller", 00:09:28.266 "bdev_nvme_enable_controller", 00:09:28.266 "bdev_nvme_reset_controller", 00:09:28.266 "bdev_nvme_get_transport_statistics", 00:09:28.266 "bdev_nvme_apply_firmware", 00:09:28.266 "bdev_nvme_detach_controller", 00:09:28.266 "bdev_nvme_get_controllers", 00:09:28.266 "bdev_nvme_attach_controller", 00:09:28.266 "bdev_nvme_set_hotplug", 00:09:28.266 "bdev_nvme_set_options", 00:09:28.266 "bdev_passthru_delete", 00:09:28.266 "bdev_passthru_create", 00:09:28.266 "bdev_lvol_grow_lvstore", 00:09:28.266 "bdev_lvol_get_lvols", 00:09:28.266 "bdev_lvol_get_lvstores", 00:09:28.266 "bdev_lvol_delete", 00:09:28.266 "bdev_lvol_set_read_only", 00:09:28.266 "bdev_lvol_resize", 00:09:28.266 "bdev_lvol_decouple_parent", 00:09:28.266 "bdev_lvol_inflate", 00:09:28.266 "bdev_lvol_rename", 00:09:28.266 "bdev_lvol_clone_bdev", 00:09:28.266 "bdev_lvol_clone", 00:09:28.266 "bdev_lvol_snapshot", 00:09:28.267 "bdev_lvol_create", 00:09:28.267 "bdev_lvol_delete_lvstore", 00:09:28.267 "bdev_lvol_rename_lvstore", 00:09:28.267 "bdev_lvol_create_lvstore", 00:09:28.267 "bdev_raid_set_options", 00:09:28.267 "bdev_raid_remove_base_bdev", 00:09:28.267 "bdev_raid_add_base_bdev", 00:09:28.267 "bdev_raid_delete", 00:09:28.267 "bdev_raid_create", 00:09:28.267 "bdev_raid_get_bdevs", 00:09:28.267 "bdev_error_inject_error", 
00:09:28.267 "bdev_error_delete", 00:09:28.267 "bdev_error_create", 00:09:28.267 "bdev_split_delete", 00:09:28.267 "bdev_split_create", 00:09:28.267 "bdev_delay_delete", 00:09:28.267 "bdev_delay_create", 00:09:28.267 "bdev_delay_update_latency", 00:09:28.267 "bdev_zone_block_delete", 00:09:28.267 "bdev_zone_block_create", 00:09:28.267 "blobfs_create", 00:09:28.267 "blobfs_detect", 00:09:28.267 "blobfs_set_cache_size", 00:09:28.267 "bdev_aio_delete", 00:09:28.267 "bdev_aio_rescan", 00:09:28.267 "bdev_aio_create", 00:09:28.267 "bdev_ftl_set_property", 00:09:28.267 "bdev_ftl_get_properties", 00:09:28.267 "bdev_ftl_get_stats", 00:09:28.267 "bdev_ftl_unmap", 00:09:28.267 "bdev_ftl_unload", 00:09:28.267 "bdev_ftl_delete", 00:09:28.267 "bdev_ftl_load", 00:09:28.267 "bdev_ftl_create", 00:09:28.267 "bdev_virtio_attach_controller", 00:09:28.267 "bdev_virtio_scsi_get_devices", 00:09:28.267 "bdev_virtio_detach_controller", 00:09:28.267 "bdev_virtio_blk_set_hotplug", 00:09:28.267 "bdev_iscsi_delete", 00:09:28.267 "bdev_iscsi_create", 00:09:28.267 "bdev_iscsi_set_options", 00:09:28.267 "bdev_uring_delete", 00:09:28.267 "bdev_uring_rescan", 00:09:28.267 "bdev_uring_create", 00:09:28.267 "accel_error_inject_error", 00:09:28.267 "ioat_scan_accel_module", 00:09:28.267 "dsa_scan_accel_module", 00:09:28.267 "iaa_scan_accel_module", 00:09:28.267 "keyring_file_remove_key", 00:09:28.267 "keyring_file_add_key", 00:09:28.267 "iscsi_set_options", 00:09:28.267 "iscsi_get_auth_groups", 00:09:28.267 "iscsi_auth_group_remove_secret", 00:09:28.267 "iscsi_auth_group_add_secret", 00:09:28.267 "iscsi_delete_auth_group", 00:09:28.267 "iscsi_create_auth_group", 00:09:28.267 "iscsi_set_discovery_auth", 00:09:28.267 "iscsi_get_options", 00:09:28.267 "iscsi_target_node_request_logout", 00:09:28.267 "iscsi_target_node_set_redirect", 00:09:28.267 "iscsi_target_node_set_auth", 00:09:28.267 "iscsi_target_node_add_lun", 00:09:28.267 "iscsi_get_stats", 00:09:28.267 "iscsi_get_connections", 00:09:28.267 "iscsi_portal_group_set_auth", 00:09:28.267 "iscsi_start_portal_group", 00:09:28.267 "iscsi_delete_portal_group", 00:09:28.267 "iscsi_create_portal_group", 00:09:28.267 "iscsi_get_portal_groups", 00:09:28.267 "iscsi_delete_target_node", 00:09:28.267 "iscsi_target_node_remove_pg_ig_maps", 00:09:28.267 "iscsi_target_node_add_pg_ig_maps", 00:09:28.267 "iscsi_create_target_node", 00:09:28.267 "iscsi_get_target_nodes", 00:09:28.267 "iscsi_delete_initiator_group", 00:09:28.267 "iscsi_initiator_group_remove_initiators", 00:09:28.267 "iscsi_initiator_group_add_initiators", 00:09:28.267 "iscsi_create_initiator_group", 00:09:28.267 "iscsi_get_initiator_groups", 00:09:28.267 "nvmf_set_crdt", 00:09:28.267 "nvmf_set_config", 00:09:28.267 "nvmf_set_max_subsystems", 00:09:28.267 "nvmf_subsystem_get_listeners", 00:09:28.267 "nvmf_subsystem_get_qpairs", 00:09:28.267 "nvmf_subsystem_get_controllers", 00:09:28.267 "nvmf_get_stats", 00:09:28.267 "nvmf_get_transports", 00:09:28.267 "nvmf_create_transport", 00:09:28.267 "nvmf_get_targets", 00:09:28.267 "nvmf_delete_target", 00:09:28.267 "nvmf_create_target", 00:09:28.267 "nvmf_subsystem_allow_any_host", 00:09:28.267 "nvmf_subsystem_remove_host", 00:09:28.267 "nvmf_subsystem_add_host", 00:09:28.267 "nvmf_ns_remove_host", 00:09:28.267 "nvmf_ns_add_host", 00:09:28.267 "nvmf_subsystem_remove_ns", 00:09:28.267 "nvmf_subsystem_add_ns", 00:09:28.267 "nvmf_subsystem_listener_set_ana_state", 00:09:28.267 "nvmf_discovery_get_referrals", 00:09:28.267 "nvmf_discovery_remove_referral", 00:09:28.267 
"nvmf_discovery_add_referral", 00:09:28.267 "nvmf_subsystem_remove_listener", 00:09:28.267 "nvmf_subsystem_add_listener", 00:09:28.267 "nvmf_delete_subsystem", 00:09:28.267 "nvmf_create_subsystem", 00:09:28.267 "nvmf_get_subsystems", 00:09:28.267 "env_dpdk_get_mem_stats", 00:09:28.267 "nbd_get_disks", 00:09:28.267 "nbd_stop_disk", 00:09:28.267 "nbd_start_disk", 00:09:28.267 "ublk_recover_disk", 00:09:28.267 "ublk_get_disks", 00:09:28.267 "ublk_stop_disk", 00:09:28.267 "ublk_start_disk", 00:09:28.267 "ublk_destroy_target", 00:09:28.267 "ublk_create_target", 00:09:28.267 "virtio_blk_create_transport", 00:09:28.267 "virtio_blk_get_transports", 00:09:28.267 "vhost_controller_set_coalescing", 00:09:28.267 "vhost_get_controllers", 00:09:28.267 "vhost_delete_controller", 00:09:28.267 "vhost_create_blk_controller", 00:09:28.267 "vhost_scsi_controller_remove_target", 00:09:28.267 "vhost_scsi_controller_add_target", 00:09:28.267 "vhost_start_scsi_controller", 00:09:28.267 "vhost_create_scsi_controller", 00:09:28.267 "thread_set_cpumask", 00:09:28.267 "framework_get_scheduler", 00:09:28.267 "framework_set_scheduler", 00:09:28.267 "framework_get_reactors", 00:09:28.267 "thread_get_io_channels", 00:09:28.267 "thread_get_pollers", 00:09:28.267 "thread_get_stats", 00:09:28.267 "framework_monitor_context_switch", 00:09:28.267 "spdk_kill_instance", 00:09:28.267 "log_enable_timestamps", 00:09:28.267 "log_get_flags", 00:09:28.267 "log_clear_flag", 00:09:28.267 "log_set_flag", 00:09:28.267 "log_get_level", 00:09:28.267 "log_set_level", 00:09:28.267 "log_get_print_level", 00:09:28.267 "log_set_print_level", 00:09:28.267 "framework_enable_cpumask_locks", 00:09:28.267 "framework_disable_cpumask_locks", 00:09:28.267 "framework_wait_init", 00:09:28.267 "framework_start_init", 00:09:28.267 "scsi_get_devices", 00:09:28.267 "bdev_get_histogram", 00:09:28.267 "bdev_enable_histogram", 00:09:28.267 "bdev_set_qos_limit", 00:09:28.267 "bdev_set_qd_sampling_period", 00:09:28.267 "bdev_get_bdevs", 00:09:28.267 "bdev_reset_iostat", 00:09:28.267 "bdev_get_iostat", 00:09:28.267 "bdev_examine", 00:09:28.267 "bdev_wait_for_examine", 00:09:28.267 "bdev_set_options", 00:09:28.267 "notify_get_notifications", 00:09:28.267 "notify_get_types", 00:09:28.267 "accel_get_stats", 00:09:28.267 "accel_set_options", 00:09:28.267 "accel_set_driver", 00:09:28.267 "accel_crypto_key_destroy", 00:09:28.267 "accel_crypto_keys_get", 00:09:28.267 "accel_crypto_key_create", 00:09:28.267 "accel_assign_opc", 00:09:28.267 "accel_get_module_info", 00:09:28.267 "accel_get_opc_assignments", 00:09:28.267 "vmd_rescan", 00:09:28.267 "vmd_remove_device", 00:09:28.267 "vmd_enable", 00:09:28.267 "sock_set_default_impl", 00:09:28.267 "sock_impl_set_options", 00:09:28.267 "sock_impl_get_options", 00:09:28.267 "iobuf_get_stats", 00:09:28.267 "iobuf_set_options", 00:09:28.267 "framework_get_pci_devices", 00:09:28.267 "framework_get_config", 00:09:28.267 "framework_get_subsystems", 00:09:28.267 "trace_get_info", 00:09:28.267 "trace_get_tpoint_group_mask", 00:09:28.267 "trace_disable_tpoint_group", 00:09:28.267 "trace_enable_tpoint_group", 00:09:28.267 "trace_clear_tpoint_mask", 00:09:28.267 "trace_set_tpoint_mask", 00:09:28.267 "keyring_get_keys", 00:09:28.267 "spdk_get_version", 00:09:28.267 "rpc_get_methods" 00:09:28.267 ] 00:09:28.267 16:01:58 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:28.267 16:01:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:28.267 16:01:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.267 16:01:58 -- 
spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:28.267 16:01:58 -- spdkcli/tcp.sh@38 -- # killprocess 71953 00:09:28.267 16:01:58 -- common/autotest_common.sh@936 -- # '[' -z 71953 ']' 00:09:28.267 16:01:58 -- common/autotest_common.sh@940 -- # kill -0 71953 00:09:28.267 16:01:58 -- common/autotest_common.sh@941 -- # uname 00:09:28.267 16:01:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:28.267 16:01:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71953 00:09:28.267 16:01:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:28.267 16:01:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:28.267 16:01:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71953' 00:09:28.267 killing process with pid 71953 00:09:28.267 16:01:58 -- common/autotest_common.sh@955 -- # kill 71953 00:09:28.267 16:01:58 -- common/autotest_common.sh@960 -- # wait 71953 00:09:28.833 ************************************ 00:09:28.833 END TEST spdkcli_tcp 00:09:28.833 ************************************ 00:09:28.833 00:09:28.833 real 0m1.245s 00:09:28.833 user 0m2.124s 00:09:28.833 sys 0m0.424s 00:09:28.833 16:01:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:28.833 16:01:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.833 16:01:58 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:28.833 16:01:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:28.833 16:01:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.833 16:01:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.833 ************************************ 00:09:28.833 START TEST dpdk_mem_utility 00:09:28.833 ************************************ 00:09:28.833 16:01:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:28.833 * Looking for test storage... 00:09:28.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:28.833 16:01:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:28.833 16:01:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=72041 00:09:28.833 16:01:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 72041 00:09:28.833 16:01:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:28.833 16:01:58 -- common/autotest_common.sh@817 -- # '[' -z 72041 ']' 00:09:28.833 16:01:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.833 16:01:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:28.833 16:01:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.833 16:01:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:28.833 16:01:58 -- common/autotest_common.sh@10 -- # set +x 00:09:29.095 [2024-04-15 16:01:58.837433] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:09:29.095 [2024-04-15 16:01:58.837801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72041 ] 00:09:29.095 [2024-04-15 16:01:58.976277] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.095 [2024-04-15 16:01:59.027109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.055 16:01:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:30.055 16:01:59 -- common/autotest_common.sh@850 -- # return 0 00:09:30.055 16:01:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:30.055 16:01:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:30.055 16:01:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.055 16:01:59 -- common/autotest_common.sh@10 -- # set +x 00:09:30.055 { 00:09:30.055 "filename": "/tmp/spdk_mem_dump.txt" 00:09:30.055 } 00:09:30.055 16:01:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.055 16:01:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:30.055 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:30.055 1 heaps totaling size 814.000000 MiB 00:09:30.055 size: 814.000000 MiB heap id: 0 00:09:30.055 end heaps---------- 00:09:30.055 8 mempools totaling size 598.116089 MiB 00:09:30.055 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:30.055 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:30.055 size: 84.521057 MiB name: bdev_io_72041 00:09:30.055 size: 51.011292 MiB name: evtpool_72041 00:09:30.055 size: 50.003479 MiB name: msgpool_72041 00:09:30.055 size: 21.763794 MiB name: PDU_Pool 00:09:30.055 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:30.055 size: 0.026123 MiB name: Session_Pool 00:09:30.055 end mempools------- 00:09:30.055 6 memzones totaling size 4.142822 MiB 00:09:30.055 size: 1.000366 MiB name: RG_ring_0_72041 00:09:30.055 size: 1.000366 MiB name: RG_ring_1_72041 00:09:30.055 size: 1.000366 MiB name: RG_ring_4_72041 00:09:30.055 size: 1.000366 MiB name: RG_ring_5_72041 00:09:30.055 size: 0.125366 MiB name: RG_ring_2_72041 00:09:30.055 size: 0.015991 MiB name: RG_ring_3_72041 00:09:30.055 end memzones------- 00:09:30.055 16:01:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:30.055 heap id: 0 total size: 814.000000 MiB number of busy elements: 297 number of free elements: 15 00:09:30.055 list of free elements. 
size: 12.472473 MiB 00:09:30.055 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:30.055 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:30.055 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:30.055 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:30.055 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:30.055 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:30.055 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:30.055 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:30.055 element at address: 0x200000200000 with size: 0.837219 MiB 00:09:30.055 element at address: 0x20001aa00000 with size: 0.563660 MiB 00:09:30.055 element at address: 0x20000b200000 with size: 0.488892 MiB 00:09:30.055 element at address: 0x200000800000 with size: 0.486145 MiB 00:09:30.055 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:30.055 element at address: 0x200027e00000 with size: 0.395752 MiB 00:09:30.055 element at address: 0x200003a00000 with size: 0.350037 MiB 00:09:30.055 list of standard malloc elements. size: 199.264954 MiB 00:09:30.055 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:30.055 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:30.055 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:30.055 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:30.055 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:30.055 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:30.055 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:30.055 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:30.055 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:30.055 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:09:30.055 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000087c740 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000087c800 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000087c980 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:09:30.055 element at 
address: 0x200003a5b040 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa904c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa90580 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa90640 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa90700 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa907c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa90880 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa90940 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa90a00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa90ac0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa90b80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa90c40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa90d00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa90dc0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa90e80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa90f40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91000 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa910c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91180 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91240 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91300 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa913c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91480 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa916c0 
with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93b80 with size: 0.000183 MiB 
00:09:30.055 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:09:30.055 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:30.056 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e65500 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:09:30.056 element at 
address: 0x200027e6cd80 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6f240 
with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:30.056 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:30.056 list of memzone associated elements. size: 602.262573 MiB 00:09:30.056 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:30.056 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:30.056 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:30.056 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:30.056 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:30.056 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_72041_0 00:09:30.056 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:30.056 associated memzone info: size: 48.002930 MiB name: MP_evtpool_72041_0 00:09:30.056 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:30.056 associated memzone info: size: 48.002930 MiB name: MP_msgpool_72041_0 00:09:30.056 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:30.056 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:30.056 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:30.056 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:30.056 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:30.056 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_72041 00:09:30.056 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:30.056 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_72041 00:09:30.056 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:30.056 associated memzone info: size: 1.007996 MiB name: MP_evtpool_72041 00:09:30.056 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:30.056 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:30.056 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:30.056 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:30.056 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:30.056 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:30.056 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:30.056 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:30.056 element at address: 0x200003eff180 
with size: 1.000488 MiB 00:09:30.056 associated memzone info: size: 1.000366 MiB name: RG_ring_0_72041 00:09:30.056 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:30.056 associated memzone info: size: 1.000366 MiB name: RG_ring_1_72041 00:09:30.056 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:30.056 associated memzone info: size: 1.000366 MiB name: RG_ring_4_72041 00:09:30.056 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:30.056 associated memzone info: size: 1.000366 MiB name: RG_ring_5_72041 00:09:30.056 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:30.056 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_72041 00:09:30.056 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:30.056 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:30.056 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:30.056 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:30.056 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:30.056 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:30.056 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:30.056 associated memzone info: size: 0.125366 MiB name: RG_ring_2_72041 00:09:30.056 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:30.056 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:30.056 element at address: 0x200027e65680 with size: 0.023743 MiB 00:09:30.056 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:30.056 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:09:30.056 associated memzone info: size: 0.015991 MiB name: RG_ring_3_72041 00:09:30.056 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:09:30.056 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:30.056 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:09:30.056 associated memzone info: size: 0.000183 MiB name: MP_msgpool_72041 00:09:30.056 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:30.056 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_72041 00:09:30.056 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:09:30.056 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:30.056 16:01:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:30.056 16:01:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 72041 00:09:30.056 16:01:59 -- common/autotest_common.sh@936 -- # '[' -z 72041 ']' 00:09:30.056 16:01:59 -- common/autotest_common.sh@940 -- # kill -0 72041 00:09:30.056 16:01:59 -- common/autotest_common.sh@941 -- # uname 00:09:30.056 16:01:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:30.056 16:01:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72041 00:09:30.056 16:01:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:30.056 16:01:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:30.056 16:01:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72041' 00:09:30.056 killing process with pid 72041 00:09:30.056 16:01:59 -- common/autotest_common.sh@955 -- # kill 72041 00:09:30.056 16:01:59 -- common/autotest_common.sh@960 -- # wait 72041 00:09:30.622 00:09:30.622 real 0m1.653s 00:09:30.622 user 0m1.801s 00:09:30.622 sys 0m0.429s 00:09:30.622 
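[editor's note] The element/memzone listing above is the raw dump checked by the dpdk_mem_utility test: the running target is asked to write out its DPDK heap and memzone state, and the script sanity-checks the result. A minimal way to request such a dump by hand is sketched below; the env_dpdk_get_mem_stats RPC name and the default /var/tmp/spdk.sock socket are assumptions based on general SPDK usage, not taken from this trace.

  # Sketch only: ask a running SPDK app to write a heap/memzone dump like the one above.
  # Assumption: the env_dpdk_get_mem_stats RPC is available in this SPDK build; it reports
  # where the generated dump file was written.
  ./scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats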
16:02:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:30.622 ************************************ 00:09:30.622 END TEST dpdk_mem_utility 00:09:30.622 ************************************ 00:09:30.622 16:02:00 -- common/autotest_common.sh@10 -- # set +x 00:09:30.622 16:02:00 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:30.622 16:02:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:30.622 16:02:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:30.622 16:02:00 -- common/autotest_common.sh@10 -- # set +x 00:09:30.622 ************************************ 00:09:30.622 START TEST event 00:09:30.622 ************************************ 00:09:30.622 16:02:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:30.622 * Looking for test storage... 00:09:30.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:30.622 16:02:00 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:30.622 16:02:00 -- bdev/nbd_common.sh@6 -- # set -e 00:09:30.622 16:02:00 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:30.622 16:02:00 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:30.622 16:02:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:30.622 16:02:00 -- common/autotest_common.sh@10 -- # set +x 00:09:30.880 ************************************ 00:09:30.880 START TEST event_perf 00:09:30.880 ************************************ 00:09:30.880 16:02:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:30.880 Running I/O for 1 seconds...[2024-04-15 16:02:00.659444] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:09:30.880 [2024-04-15 16:02:00.659945] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72126 ] 00:09:30.880 [2024-04-15 16:02:00.806702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.137 [2024-04-15 16:02:00.865381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.137 [2024-04-15 16:02:00.865494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.137 [2024-04-15 16:02:00.865613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.137 [2024-04-15 16:02:00.865613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.137 [2024-04-15 16:02:00.867881] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:09:32.068 Running I/O for 1 seconds... 00:09:32.068 lcore 0: 158694 00:09:32.068 lcore 1: 158694 00:09:32.068 lcore 2: 158696 00:09:32.068 lcore 3: 158695 00:09:32.068 done. 
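[editor's note] The lcore lines just above are the per-core event counts reported by event_perf after its 1-second run; the invocation is recorded in the trace, with -m giving the reactor core mask and -t the run time in seconds. Re-running it directly with the same flags would look like:

  # Event-framework micro-benchmark on cores 0-3 (mask 0xF) for 1 second,
  # matching the command recorded in the trace above.
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1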
00:09:32.068 ************************************ 00:09:32.068 END TEST event_perf 00:09:32.068 ************************************ 00:09:32.068 00:09:32.068 real 0m1.300s 00:09:32.068 user 0m4.091s 00:09:32.068 sys 0m0.068s 00:09:32.068 16:02:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:32.068 16:02:01 -- common/autotest_common.sh@10 -- # set +x 00:09:32.068 16:02:01 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:32.068 16:02:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:32.068 16:02:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.068 16:02:01 -- common/autotest_common.sh@10 -- # set +x 00:09:32.326 ************************************ 00:09:32.326 START TEST event_reactor 00:09:32.326 ************************************ 00:09:32.326 16:02:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:32.326 [2024-04-15 16:02:02.078136] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:09:32.326 [2024-04-15 16:02:02.078476] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72172 ] 00:09:32.326 [2024-04-15 16:02:02.226291] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.326 [2024-04-15 16:02:02.282409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.326 [2024-04-15 16:02:02.282737] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:09:33.702 test_start 00:09:33.702 oneshot 00:09:33.702 tick 100 00:09:33.702 tick 100 00:09:33.702 tick 250 00:09:33.702 tick 100 00:09:33.702 tick 100 00:09:33.702 tick 100 00:09:33.702 tick 250 00:09:33.702 tick 500 00:09:33.702 tick 100 00:09:33.702 tick 100 00:09:33.702 tick 250 00:09:33.702 tick 100 00:09:33.702 tick 100 00:09:33.702 test_end 00:09:33.702 00:09:33.702 real 0m1.291s 00:09:33.702 user 0m1.123s 00:09:33.702 sys 0m0.059s 00:09:33.702 16:02:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:33.702 ************************************ 00:09:33.702 END TEST event_reactor 00:09:33.702 ************************************ 00:09:33.702 16:02:03 -- common/autotest_common.sh@10 -- # set +x 00:09:33.702 16:02:03 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:33.702 16:02:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:33.702 16:02:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.702 16:02:03 -- common/autotest_common.sh@10 -- # set +x 00:09:33.702 ************************************ 00:09:33.702 START TEST event_reactor_perf 00:09:33.702 ************************************ 00:09:33.702 16:02:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:33.702 [2024-04-15 16:02:03.502669] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
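[editor's note] The test_start / oneshot / tick ... / test_end block above is the output of the reactor test app, which (judging from the output) schedules a one-shot event plus periodic pollers, the 100/250/500 labels, on a single core and logs each time one fires; reactor_perf, whose startup begins just above, measures raw event throughput on one core. Both invocations are visible in the trace and can be run standalone:

  # Single-core reactor/poller smoke test, 1 second run (as invoked above).
  /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
  # Companion throughput test; reports events per second (356239 in this run).
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1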
00:09:33.702 [2024-04-15 16:02:03.502977] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72206 ] 00:09:33.702 [2024-04-15 16:02:03.649650] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.960 [2024-04-15 16:02:03.705320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.960 [2024-04-15 16:02:03.705643] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:09:34.895 test_start 00:09:34.895 test_end 00:09:34.895 Performance: 356239 events per second 00:09:34.895 ************************************ 00:09:34.895 END TEST event_reactor_perf 00:09:34.895 ************************************ 00:09:34.895 00:09:34.895 real 0m1.289s 00:09:34.895 user 0m1.120s 00:09:34.895 sys 0m0.058s 00:09:34.895 16:02:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:34.895 16:02:04 -- common/autotest_common.sh@10 -- # set +x 00:09:34.895 16:02:04 -- event/event.sh@49 -- # uname -s 00:09:34.895 16:02:04 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:34.895 16:02:04 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:34.895 16:02:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:34.895 16:02:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:34.895 16:02:04 -- common/autotest_common.sh@10 -- # set +x 00:09:35.153 ************************************ 00:09:35.153 START TEST event_scheduler 00:09:35.153 ************************************ 00:09:35.153 16:02:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:35.153 * Looking for test storage... 00:09:35.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:35.153 16:02:04 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:35.153 16:02:05 -- scheduler/scheduler.sh@35 -- # scheduler_pid=72272 00:09:35.153 16:02:05 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:35.153 16:02:05 -- scheduler/scheduler.sh@37 -- # waitforlisten 72272 00:09:35.153 16:02:05 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:35.153 16:02:05 -- common/autotest_common.sh@817 -- # '[' -z 72272 ']' 00:09:35.153 16:02:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.153 16:02:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:35.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.153 16:02:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.153 16:02:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:35.153 16:02:05 -- common/autotest_common.sh@10 -- # set +x 00:09:35.153 [2024-04-15 16:02:05.052111] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:09:35.153 [2024-04-15 16:02:05.052444] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72272 ] 00:09:35.411 [2024-04-15 16:02:05.200949] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.411 [2024-04-15 16:02:05.261151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.411 [2024-04-15 16:02:05.261242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.411 [2024-04-15 16:02:05.261341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.411 [2024-04-15 16:02:05.261344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.429 16:02:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:36.429 16:02:06 -- common/autotest_common.sh@850 -- # return 0 00:09:36.429 16:02:06 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:36.429 16:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.429 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:36.429 POWER: Env isn't set yet! 00:09:36.429 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:36.429 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:36.429 POWER: Cannot set governor of lcore 0 to userspace 00:09:36.429 POWER: Attempting to initialise PSTAT power management... 00:09:36.429 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:36.429 POWER: Cannot set governor of lcore 0 to performance 00:09:36.429 POWER: Attempting to initialise CPPC power management... 00:09:36.430 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:36.430 POWER: Cannot set governor of lcore 0 to userspace 00:09:36.430 POWER: Attempting to initialise VM power management... 00:09:36.430 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:36.430 POWER: Unable to set Power Management Environment for lcore 0 00:09:36.430 [2024-04-15 16:02:06.014750] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:09:36.430 [2024-04-15 16:02:06.014910] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:09:36.430 [2024-04-15 16:02:06.015055] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:09:36.430 16:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:36.430 16:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.430 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:36.430 [2024-04-15 16:02:06.082063] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
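[editor's note] Because the scheduler test app is launched with --wait-for-rpc, the dynamic scheduler is selected over RPC before subsystem initialization, and the POWER/GUEST_CHANNEL errors above are the expected fallback on a VM with no usable cpufreq driver; the test then continues without the dpdk governor. The rpc_cmd calls in the trace correspond to the following sequence against the app's socket (a sketch; /var/tmp/spdk.sock is the rpc_addr shown earlier in this trace):

  # Select the dynamic scheduler, then let framework initialization continue.
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init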
00:09:36.430 16:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:36.430 16:02:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:36.430 16:02:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.430 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:36.430 ************************************ 00:09:36.430 START TEST scheduler_create_thread 00:09:36.430 ************************************ 00:09:36.430 16:02:06 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:36.430 16:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.430 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:36.430 2 00:09:36.430 16:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:36.430 16:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.430 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:36.430 3 00:09:36.430 16:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:36.430 16:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.430 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:36.430 4 00:09:36.430 16:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:36.430 16:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.430 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:36.430 5 00:09:36.430 16:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:36.430 16:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.430 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:36.430 6 00:09:36.430 16:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:36.430 16:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.430 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:36.430 7 00:09:36.430 16:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:36.430 16:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.430 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:36.430 8 00:09:36.430 16:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:36.430 16:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.430 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:36.430 9 00:09:36.430 
16:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:36.430 16:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.430 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:36.430 10 00:09:36.430 16:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:36.430 16:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.430 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:36.430 16:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:36.430 16:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.430 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:36.430 16:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:36.430 16:02:06 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:36.430 16:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:36.430 16:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:37.806 16:02:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.806 16:02:07 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:37.806 16:02:07 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:37.806 16:02:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.806 16:02:07 -- common/autotest_common.sh@10 -- # set +x 00:09:39.180 ************************************ 00:09:39.180 END TEST scheduler_create_thread 00:09:39.180 ************************************ 00:09:39.180 16:02:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:39.180 00:09:39.180 real 0m2.612s 00:09:39.180 user 0m0.019s 00:09:39.180 sys 0m0.008s 00:09:39.180 16:02:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:39.180 16:02:08 -- common/autotest_common.sh@10 -- # set +x 00:09:39.180 16:02:08 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:39.180 16:02:08 -- scheduler/scheduler.sh@46 -- # killprocess 72272 00:09:39.180 16:02:08 -- common/autotest_common.sh@936 -- # '[' -z 72272 ']' 00:09:39.180 16:02:08 -- common/autotest_common.sh@940 -- # kill -0 72272 00:09:39.180 16:02:08 -- common/autotest_common.sh@941 -- # uname 00:09:39.180 16:02:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:39.180 16:02:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72272 00:09:39.180 killing process with pid 72272 00:09:39.180 16:02:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:39.180 16:02:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:39.180 16:02:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72272' 00:09:39.180 16:02:08 -- common/autotest_common.sh@955 -- # kill 72272 00:09:39.180 16:02:08 -- common/autotest_common.sh@960 -- # wait 72272 00:09:39.447 [2024-04-15 16:02:09.254224] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
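[editor's note] The scheduler_create_thread subtest that just finished drives a test-only RPC plugin: scheduler_thread_create spawns lightweight threads with a name, an optional cpumask (-m) and an activity level (-a; the one_third_active/half_active names suggest a 0-100 percentage), scheduler_thread_set_active changes that level for an existing thread id, and scheduler_thread_delete removes one. Condensed from the rpc_cmd lines recorded above (the --plugin scheduler_plugin flag is taken verbatim from the trace):

  # Test-plugin RPCs exercised by scheduler_create_thread (thread ids 11/12 are the ones seen above).
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0      # returned thread_id=11
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100        # returned thread_id=12
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12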
00:09:39.705 00:09:39.705 real 0m4.542s 00:09:39.705 user 0m8.661s 00:09:39.705 sys 0m0.403s 00:09:39.705 16:02:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:39.705 16:02:09 -- common/autotest_common.sh@10 -- # set +x 00:09:39.705 ************************************ 00:09:39.705 END TEST event_scheduler 00:09:39.705 ************************************ 00:09:39.705 16:02:09 -- event/event.sh@51 -- # modprobe -n nbd 00:09:39.705 16:02:09 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:39.705 16:02:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:39.705 16:02:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:39.706 16:02:09 -- common/autotest_common.sh@10 -- # set +x 00:09:39.706 ************************************ 00:09:39.706 START TEST app_repeat 00:09:39.706 ************************************ 00:09:39.706 16:02:09 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:09:39.706 16:02:09 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.706 16:02:09 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:39.706 16:02:09 -- event/event.sh@13 -- # local nbd_list 00:09:39.706 16:02:09 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:39.706 16:02:09 -- event/event.sh@14 -- # local bdev_list 00:09:39.706 16:02:09 -- event/event.sh@15 -- # local repeat_times=4 00:09:39.706 16:02:09 -- event/event.sh@17 -- # modprobe nbd 00:09:39.706 16:02:09 -- event/event.sh@19 -- # repeat_pid=72380 00:09:39.706 16:02:09 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:39.706 16:02:09 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:39.706 Process app_repeat pid: 72380 00:09:39.706 16:02:09 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72380' 00:09:39.706 16:02:09 -- event/event.sh@23 -- # for i in {0..2} 00:09:39.706 16:02:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:39.706 spdk_app_start Round 0 00:09:39.706 16:02:09 -- event/event.sh@25 -- # waitforlisten 72380 /var/tmp/spdk-nbd.sock 00:09:39.706 16:02:09 -- common/autotest_common.sh@817 -- # '[' -z 72380 ']' 00:09:39.706 16:02:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:39.706 16:02:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:39.706 16:02:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:39.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:39.706 16:02:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:39.706 16:02:09 -- common/autotest_common.sh@10 -- # set +x 00:09:39.706 [2024-04-15 16:02:09.607174] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:09:39.706 [2024-04-15 16:02:09.607467] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72380 ] 00:09:39.963 [2024-04-15 16:02:09.746596] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:39.963 [2024-04-15 16:02:09.807768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.963 [2024-04-15 16:02:09.807779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.963 16:02:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:39.963 16:02:09 -- common/autotest_common.sh@850 -- # return 0 00:09:39.963 16:02:09 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:40.220 Malloc0 00:09:40.220 16:02:10 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:40.785 Malloc1 00:09:40.785 16:02:10 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:40.785 16:02:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.785 16:02:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:40.785 16:02:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:40.785 16:02:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.785 16:02:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:40.785 16:02:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:40.785 16:02:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.786 16:02:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:40.786 16:02:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:40.786 16:02:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.786 16:02:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:40.786 16:02:10 -- bdev/nbd_common.sh@12 -- # local i 00:09:40.786 16:02:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:40.786 16:02:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:40.786 16:02:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:40.786 /dev/nbd0 00:09:40.786 16:02:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:40.786 16:02:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:40.786 16:02:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:09:40.786 16:02:10 -- common/autotest_common.sh@855 -- # local i 00:09:40.786 16:02:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:40.786 16:02:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:40.786 16:02:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:09:40.786 16:02:10 -- common/autotest_common.sh@859 -- # break 00:09:40.786 16:02:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:40.786 16:02:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:40.786 16:02:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:40.786 1+0 records in 00:09:40.786 1+0 records out 00:09:40.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477399 s, 8.6 MB/s 00:09:40.786 16:02:10 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:40.786 16:02:10 -- common/autotest_common.sh@872 -- # size=4096 00:09:40.786 16:02:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:40.786 16:02:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:40.786 16:02:10 -- common/autotest_common.sh@875 -- # return 0 00:09:40.786 16:02:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:40.786 16:02:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:40.786 16:02:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:41.042 /dev/nbd1 00:09:41.042 16:02:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:41.042 16:02:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:41.042 16:02:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:09:41.042 16:02:10 -- common/autotest_common.sh@855 -- # local i 00:09:41.042 16:02:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:41.042 16:02:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:41.042 16:02:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:09:41.042 16:02:10 -- common/autotest_common.sh@859 -- # break 00:09:41.042 16:02:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:41.042 16:02:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:41.042 16:02:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:41.042 1+0 records in 00:09:41.042 1+0 records out 00:09:41.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482313 s, 8.5 MB/s 00:09:41.042 16:02:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:41.042 16:02:10 -- common/autotest_common.sh@872 -- # size=4096 00:09:41.042 16:02:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:41.042 16:02:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:41.042 16:02:10 -- common/autotest_common.sh@875 -- # return 0 00:09:41.042 16:02:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:41.042 16:02:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:41.042 16:02:10 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:41.042 16:02:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.042 16:02:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:41.607 16:02:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:41.607 { 00:09:41.607 "nbd_device": "/dev/nbd0", 00:09:41.607 "bdev_name": "Malloc0" 00:09:41.607 }, 00:09:41.607 { 00:09:41.607 "nbd_device": "/dev/nbd1", 00:09:41.607 "bdev_name": "Malloc1" 00:09:41.607 } 00:09:41.607 ]' 00:09:41.607 16:02:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:41.607 { 00:09:41.608 "nbd_device": "/dev/nbd0", 00:09:41.608 "bdev_name": "Malloc0" 00:09:41.608 }, 00:09:41.608 { 00:09:41.608 "nbd_device": "/dev/nbd1", 00:09:41.608 "bdev_name": "Malloc1" 00:09:41.608 } 00:09:41.608 ]' 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:41.608 /dev/nbd1' 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:09:41.608 /dev/nbd1' 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@65 -- # count=2 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@95 -- # count=2 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:41.608 256+0 records in 00:09:41.608 256+0 records out 00:09:41.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011406 s, 91.9 MB/s 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:41.608 256+0 records in 00:09:41.608 256+0 records out 00:09:41.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264419 s, 39.7 MB/s 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:41.608 256+0 records in 00:09:41.608 256+0 records out 00:09:41.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0325032 s, 32.3 MB/s 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@51 -- # local i 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.608 16:02:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:41.865 16:02:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:41.865 16:02:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:41.865 16:02:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:41.865 16:02:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.865 16:02:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.865 16:02:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:41.865 16:02:11 -- bdev/nbd_common.sh@41 -- # break 00:09:41.865 16:02:11 -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.865 16:02:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.865 16:02:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:42.142 16:02:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:42.142 16:02:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:42.142 16:02:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:42.142 16:02:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:42.142 16:02:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:42.142 16:02:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:42.142 16:02:11 -- bdev/nbd_common.sh@41 -- # break 00:09:42.142 16:02:11 -- bdev/nbd_common.sh@45 -- # return 0 00:09:42.142 16:02:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:42.142 16:02:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.142 16:02:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:42.399 16:02:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:42.399 16:02:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:42.399 16:02:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:42.399 16:02:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:42.399 16:02:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:42.399 16:02:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:42.399 16:02:12 -- bdev/nbd_common.sh@65 -- # true 00:09:42.400 16:02:12 -- bdev/nbd_common.sh@65 -- # count=0 00:09:42.400 16:02:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:42.400 16:02:12 -- bdev/nbd_common.sh@104 -- # count=0 00:09:42.400 16:02:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:42.400 16:02:12 -- bdev/nbd_common.sh@109 -- # return 0 00:09:42.400 16:02:12 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:42.966 16:02:12 -- event/event.sh@35 -- # sleep 3 00:09:42.966 [2024-04-15 16:02:12.806364] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.966 [2024-04-15 16:02:12.855724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.966 [2024-04-15 16:02:12.855732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.966 [2024-04-15 16:02:12.899886] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:42.966 [2024-04-15 16:02:12.900215] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:46.249 spdk_app_start Round 1 00:09:46.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
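[editor's note] Round 0, which completed just above before Round 1 was announced, is the NBD round trip that app_repeat repeats each round: create two malloc bdevs (bdev_malloc_create 64 4096, i.e. 64 MB with 4096-byte blocks in the usual argument order), export them as /dev/nbd0 and /dev/nbd1, write a 1 MiB random pattern through the kernel block devices, verify it with cmp, then tear everything down and signal the app. Condensed from the commands recorded in the trace (temp-file paths shortened here):

  RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"           # socket used by app_repeat in this trace
  $RPC bdev_malloc_create 64 4096                            # -> Malloc0
  $RPC bdev_malloc_create 64 4096                            # -> Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  $RPC nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256        # 1 MiB of random test data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0                         # verify the data reads back intact
  cmp -b -n 1M nbdrandtest /dev/nbd1
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
  $RPC spdk_kill_instance SIGTERM                            # end the round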
00:09:46.249 16:02:15 -- event/event.sh@23 -- # for i in {0..2} 00:09:46.249 16:02:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:46.249 16:02:15 -- event/event.sh@25 -- # waitforlisten 72380 /var/tmp/spdk-nbd.sock 00:09:46.249 16:02:15 -- common/autotest_common.sh@817 -- # '[' -z 72380 ']' 00:09:46.249 16:02:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:46.249 16:02:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:46.249 16:02:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:46.249 16:02:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:46.249 16:02:15 -- common/autotest_common.sh@10 -- # set +x 00:09:46.249 16:02:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:46.249 16:02:15 -- common/autotest_common.sh@850 -- # return 0 00:09:46.249 16:02:15 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:46.249 Malloc0 00:09:46.249 16:02:16 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:46.508 Malloc1 00:09:46.508 16:02:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@12 -- # local i 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:46.508 16:02:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:46.766 /dev/nbd0 00:09:46.766 16:02:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:46.766 16:02:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:46.766 16:02:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:09:46.766 16:02:16 -- common/autotest_common.sh@855 -- # local i 00:09:46.766 16:02:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:46.766 16:02:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:46.766 16:02:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:09:46.766 16:02:16 -- common/autotest_common.sh@859 -- # break 00:09:46.766 16:02:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:46.766 16:02:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:46.766 16:02:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:46.766 1+0 records in 00:09:46.766 1+0 records out 00:09:46.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350329 s, 11.7 MB/s 00:09:46.766 16:02:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:46.766 16:02:16 -- common/autotest_common.sh@872 -- # size=4096 00:09:46.766 16:02:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:46.766 16:02:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:46.766 16:02:16 -- common/autotest_common.sh@875 -- # return 0 00:09:46.766 16:02:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:46.766 16:02:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:46.766 16:02:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:47.025 /dev/nbd1 00:09:47.025 16:02:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:47.025 16:02:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:47.025 16:02:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:09:47.025 16:02:16 -- common/autotest_common.sh@855 -- # local i 00:09:47.025 16:02:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:47.025 16:02:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:47.025 16:02:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:09:47.025 16:02:16 -- common/autotest_common.sh@859 -- # break 00:09:47.025 16:02:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:47.025 16:02:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:47.025 16:02:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:47.025 1+0 records in 00:09:47.025 1+0 records out 00:09:47.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310633 s, 13.2 MB/s 00:09:47.025 16:02:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:47.025 16:02:16 -- common/autotest_common.sh@872 -- # size=4096 00:09:47.025 16:02:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:47.025 16:02:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:47.025 16:02:16 -- common/autotest_common.sh@875 -- # return 0 00:09:47.025 16:02:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:47.025 16:02:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:47.025 16:02:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:47.025 16:02:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.025 16:02:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:47.288 16:02:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:47.288 { 00:09:47.288 "nbd_device": "/dev/nbd0", 00:09:47.288 "bdev_name": "Malloc0" 00:09:47.288 }, 00:09:47.288 { 00:09:47.288 "nbd_device": "/dev/nbd1", 00:09:47.288 "bdev_name": "Malloc1" 00:09:47.288 } 00:09:47.288 ]' 00:09:47.288 16:02:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:47.288 { 00:09:47.288 "nbd_device": "/dev/nbd0", 00:09:47.288 "bdev_name": "Malloc0" 00:09:47.288 }, 00:09:47.288 { 00:09:47.288 "nbd_device": "/dev/nbd1", 00:09:47.289 "bdev_name": "Malloc1" 00:09:47.289 } 00:09:47.289 ]' 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@64 -- 
# nbd_disks_name='/dev/nbd0 00:09:47.289 /dev/nbd1' 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:47.289 /dev/nbd1' 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@65 -- # count=2 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@95 -- # count=2 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:47.289 256+0 records in 00:09:47.289 256+0 records out 00:09:47.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00819625 s, 128 MB/s 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:47.289 256+0 records in 00:09:47.289 256+0 records out 00:09:47.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021207 s, 49.4 MB/s 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:47.289 256+0 records in 00:09:47.289 256+0 records out 00:09:47.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301617 s, 34.8 MB/s 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@51 -- # local i 00:09:47.289 
16:02:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.289 16:02:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:47.547 16:02:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:47.547 16:02:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:47.547 16:02:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:47.547 16:02:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.547 16:02:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.547 16:02:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:47.547 16:02:17 -- bdev/nbd_common.sh@41 -- # break 00:09:47.547 16:02:17 -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.547 16:02:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.547 16:02:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:47.806 16:02:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:47.806 16:02:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:47.806 16:02:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:47.806 16:02:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.806 16:02:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.806 16:02:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:47.806 16:02:17 -- bdev/nbd_common.sh@41 -- # break 00:09:47.806 16:02:17 -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.806 16:02:17 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:47.806 16:02:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.806 16:02:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:48.065 16:02:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:48.065 16:02:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:48.065 16:02:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:48.065 16:02:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:48.065 16:02:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:48.065 16:02:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:48.065 16:02:17 -- bdev/nbd_common.sh@65 -- # true 00:09:48.065 16:02:17 -- bdev/nbd_common.sh@65 -- # count=0 00:09:48.065 16:02:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:48.065 16:02:17 -- bdev/nbd_common.sh@104 -- # count=0 00:09:48.065 16:02:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:48.065 16:02:17 -- bdev/nbd_common.sh@109 -- # return 0 00:09:48.065 16:02:17 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:48.324 16:02:18 -- event/event.sh@35 -- # sleep 3 00:09:48.582 [2024-04-15 16:02:18.415247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.582 [2024-04-15 16:02:18.463791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.582 [2024-04-15 16:02:18.463798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.582 [2024-04-15 16:02:18.508767] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:48.582 [2024-04-15 16:02:18.508820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
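[editor's note] Round 1 above repeats the identical create/verify/teardown sequence, and a Round 2 follows; the "already registered" notify.c notices at each restart are benign, presumably because the notification registry persists across spdk_app_start rounds within the same app_repeat process. A rough paraphrase of the driving loop, reconstructed from the event.sh fragments visible in the trace (for i in {0..2}, waitforlisten, spdk_kill_instance, sleep 3) rather than copied from the script, is:

  # Paraphrase only; not the verbatim test/event/event.sh.
  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock     # wait for the app's RPC socket to come back
      # ... malloc bdev + NBD create/verify/stop steps (see the Round 0 sketch above) ...
      ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3
  done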
00:09:51.931 16:02:21 -- event/event.sh@23 -- # for i in {0..2} 00:09:51.931 spdk_app_start Round 2 00:09:51.931 16:02:21 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:51.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:51.931 16:02:21 -- event/event.sh@25 -- # waitforlisten 72380 /var/tmp/spdk-nbd.sock 00:09:51.931 16:02:21 -- common/autotest_common.sh@817 -- # '[' -z 72380 ']' 00:09:51.931 16:02:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:51.931 16:02:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:51.931 16:02:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:51.931 16:02:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:51.931 16:02:21 -- common/autotest_common.sh@10 -- # set +x 00:09:51.931 16:02:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:51.931 16:02:21 -- common/autotest_common.sh@850 -- # return 0 00:09:51.931 16:02:21 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:51.931 Malloc0 00:09:51.931 16:02:21 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:52.191 Malloc1 00:09:52.191 16:02:22 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@12 -- # local i 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.191 16:02:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:52.447 /dev/nbd0 00:09:52.447 16:02:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:52.447 16:02:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:52.447 16:02:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:09:52.447 16:02:22 -- common/autotest_common.sh@855 -- # local i 00:09:52.447 16:02:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:52.447 16:02:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:52.447 16:02:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:09:52.447 16:02:22 -- common/autotest_common.sh@859 -- # break 00:09:52.447 16:02:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:52.447 16:02:22 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:09:52.447 16:02:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:52.447 1+0 records in 00:09:52.447 1+0 records out 00:09:52.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356338 s, 11.5 MB/s 00:09:52.447 16:02:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.447 16:02:22 -- common/autotest_common.sh@872 -- # size=4096 00:09:52.447 16:02:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.447 16:02:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:52.447 16:02:22 -- common/autotest_common.sh@875 -- # return 0 00:09:52.447 16:02:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.447 16:02:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.447 16:02:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:52.706 /dev/nbd1 00:09:52.706 16:02:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:52.706 16:02:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:52.706 16:02:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:09:52.706 16:02:22 -- common/autotest_common.sh@855 -- # local i 00:09:52.706 16:02:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:52.706 16:02:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:52.706 16:02:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:09:52.706 16:02:22 -- common/autotest_common.sh@859 -- # break 00:09:52.706 16:02:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:52.706 16:02:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:52.706 16:02:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:52.706 1+0 records in 00:09:52.706 1+0 records out 00:09:52.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223958 s, 18.3 MB/s 00:09:52.706 16:02:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.706 16:02:22 -- common/autotest_common.sh@872 -- # size=4096 00:09:52.706 16:02:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.706 16:02:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:52.706 16:02:22 -- common/autotest_common.sh@875 -- # return 0 00:09:52.706 16:02:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.706 16:02:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.706 16:02:22 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:52.706 16:02:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.706 16:02:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:52.984 { 00:09:52.984 "nbd_device": "/dev/nbd0", 00:09:52.984 "bdev_name": "Malloc0" 00:09:52.984 }, 00:09:52.984 { 00:09:52.984 "nbd_device": "/dev/nbd1", 00:09:52.984 "bdev_name": "Malloc1" 00:09:52.984 } 00:09:52.984 ]' 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:52.984 { 00:09:52.984 "nbd_device": "/dev/nbd0", 00:09:52.984 "bdev_name": "Malloc0" 00:09:52.984 }, 00:09:52.984 { 00:09:52.984 "nbd_device": "/dev/nbd1", 00:09:52.984 "bdev_name": "Malloc1" 00:09:52.984 } 
00:09:52.984 ]' 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:52.984 /dev/nbd1' 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:52.984 /dev/nbd1' 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@65 -- # count=2 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@95 -- # count=2 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:52.984 16:02:22 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:53.265 256+0 records in 00:09:53.265 256+0 records out 00:09:53.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00695544 s, 151 MB/s 00:09:53.265 16:02:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.265 16:02:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:53.265 256+0 records in 00:09:53.265 256+0 records out 00:09:53.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206631 s, 50.7 MB/s 00:09:53.265 16:02:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.265 16:02:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:53.265 256+0 records in 00:09:53.265 256+0 records out 00:09:53.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246028 s, 42.6 MB/s 00:09:53.265 16:02:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:53.265 16:02:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.265 16:02:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:53.265 16:02:22 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:53.265 16:02:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:53.265 16:02:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:53.265 16:02:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:53.265 16:02:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:53.265 16:02:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:53.265 16:02:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:53.265 16:02:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:53.265 16:02:23 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:53.265 16:02:23 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:53.266 16:02:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.266 16:02:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:09:53.266 16:02:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:53.266 16:02:23 -- bdev/nbd_common.sh@51 -- # local i 00:09:53.266 16:02:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.266 16:02:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:53.524 16:02:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:53.524 16:02:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:53.524 16:02:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:53.524 16:02:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.524 16:02:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.524 16:02:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:53.524 16:02:23 -- bdev/nbd_common.sh@41 -- # break 00:09:53.524 16:02:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.524 16:02:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.524 16:02:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:53.781 16:02:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:53.781 16:02:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:53.781 16:02:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:53.781 16:02:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.781 16:02:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.781 16:02:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:53.782 16:02:23 -- bdev/nbd_common.sh@41 -- # break 00:09:53.782 16:02:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.782 16:02:23 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:53.782 16:02:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.782 16:02:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:54.040 16:02:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:54.040 16:02:23 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:54.040 16:02:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:54.040 16:02:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:54.040 16:02:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:54.040 16:02:23 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:54.040 16:02:23 -- bdev/nbd_common.sh@65 -- # true 00:09:54.040 16:02:23 -- bdev/nbd_common.sh@65 -- # count=0 00:09:54.040 16:02:23 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:54.040 16:02:23 -- bdev/nbd_common.sh@104 -- # count=0 00:09:54.040 16:02:23 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:54.040 16:02:23 -- bdev/nbd_common.sh@109 -- # return 0 00:09:54.040 16:02:23 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:54.606 16:02:24 -- event/event.sh@35 -- # sleep 3 00:09:54.606 [2024-04-15 16:02:24.484709] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:54.606 [2024-04-15 16:02:24.536329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.606 [2024-04-15 16:02:24.536334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.865 [2024-04-15 16:02:24.581411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:09:54.865 [2024-04-15 16:02:24.581471] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:57.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:57.395 16:02:27 -- event/event.sh@38 -- # waitforlisten 72380 /var/tmp/spdk-nbd.sock 00:09:57.395 16:02:27 -- common/autotest_common.sh@817 -- # '[' -z 72380 ']' 00:09:57.395 16:02:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:57.396 16:02:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:57.396 16:02:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:57.396 16:02:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:57.396 16:02:27 -- common/autotest_common.sh@10 -- # set +x 00:09:57.963 16:02:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:57.963 16:02:27 -- common/autotest_common.sh@850 -- # return 0 00:09:57.963 16:02:27 -- event/event.sh@39 -- # killprocess 72380 00:09:57.964 16:02:27 -- common/autotest_common.sh@936 -- # '[' -z 72380 ']' 00:09:57.964 16:02:27 -- common/autotest_common.sh@940 -- # kill -0 72380 00:09:57.964 16:02:27 -- common/autotest_common.sh@941 -- # uname 00:09:57.964 16:02:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:57.964 16:02:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72380 00:09:57.964 killing process with pid 72380 00:09:57.964 16:02:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:57.964 16:02:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:57.964 16:02:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72380' 00:09:57.964 16:02:27 -- common/autotest_common.sh@955 -- # kill 72380 00:09:57.964 16:02:27 -- common/autotest_common.sh@960 -- # wait 72380 00:09:57.964 spdk_app_start is called in Round 0. 00:09:57.964 Shutdown signal received, stop current app iteration 00:09:57.964 Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 reinitialization... 00:09:57.964 spdk_app_start is called in Round 1. 00:09:57.964 Shutdown signal received, stop current app iteration 00:09:57.964 Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 reinitialization... 00:09:57.964 spdk_app_start is called in Round 2. 00:09:57.964 Shutdown signal received, stop current app iteration 00:09:57.964 Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 reinitialization... 00:09:57.964 spdk_app_start is called in Round 3. 
00:09:57.964 Shutdown signal received, stop current app iteration 00:09:57.964 16:02:27 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:57.964 16:02:27 -- event/event.sh@42 -- # return 0 00:09:57.964 00:09:57.964 real 0m18.260s 00:09:57.964 user 0m40.683s 00:09:57.964 sys 0m3.133s 00:09:57.964 16:02:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:57.964 16:02:27 -- common/autotest_common.sh@10 -- # set +x 00:09:57.964 ************************************ 00:09:57.964 END TEST app_repeat 00:09:57.964 ************************************ 00:09:57.964 16:02:27 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:57.964 16:02:27 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:57.964 16:02:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:57.964 16:02:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:57.964 16:02:27 -- common/autotest_common.sh@10 -- # set +x 00:09:58.223 ************************************ 00:09:58.223 START TEST cpu_locks 00:09:58.223 ************************************ 00:09:58.223 16:02:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:58.223 * Looking for test storage... 00:09:58.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:58.223 16:02:28 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:58.223 16:02:28 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:58.223 16:02:28 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:58.223 16:02:28 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:58.223 16:02:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:58.223 16:02:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:58.223 16:02:28 -- common/autotest_common.sh@10 -- # set +x 00:09:58.223 ************************************ 00:09:58.223 START TEST default_locks 00:09:58.223 ************************************ 00:09:58.223 16:02:28 -- common/autotest_common.sh@1111 -- # default_locks 00:09:58.223 16:02:28 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72818 00:09:58.223 16:02:28 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:58.223 16:02:28 -- event/cpu_locks.sh@47 -- # waitforlisten 72818 00:09:58.223 16:02:28 -- common/autotest_common.sh@817 -- # '[' -z 72818 ']' 00:09:58.223 16:02:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.223 16:02:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:58.223 16:02:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.223 16:02:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:58.223 16:02:28 -- common/autotest_common.sh@10 -- # set +x 00:09:58.223 [2024-04-15 16:02:28.187070] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:09:58.223 [2024-04-15 16:02:28.187171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72818 ] 00:09:58.481 [2024-04-15 16:02:28.334484] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.481 [2024-04-15 16:02:28.390273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.416 16:02:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:59.416 16:02:29 -- common/autotest_common.sh@850 -- # return 0 00:09:59.416 16:02:29 -- event/cpu_locks.sh@49 -- # locks_exist 72818 00:09:59.416 16:02:29 -- event/cpu_locks.sh@22 -- # lslocks -p 72818 00:09:59.416 16:02:29 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:59.674 16:02:29 -- event/cpu_locks.sh@50 -- # killprocess 72818 00:09:59.674 16:02:29 -- common/autotest_common.sh@936 -- # '[' -z 72818 ']' 00:09:59.674 16:02:29 -- common/autotest_common.sh@940 -- # kill -0 72818 00:09:59.675 16:02:29 -- common/autotest_common.sh@941 -- # uname 00:09:59.675 16:02:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:59.675 16:02:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72818 00:09:59.675 16:02:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:59.675 16:02:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:59.675 16:02:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72818' 00:09:59.675 killing process with pid 72818 00:09:59.675 16:02:29 -- common/autotest_common.sh@955 -- # kill 72818 00:09:59.675 16:02:29 -- common/autotest_common.sh@960 -- # wait 72818 00:09:59.933 16:02:29 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72818 00:09:59.933 16:02:29 -- common/autotest_common.sh@638 -- # local es=0 00:09:59.933 16:02:29 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 72818 00:09:59.933 16:02:29 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:09:59.933 16:02:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:59.933 16:02:29 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:09:59.933 16:02:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:59.933 16:02:29 -- common/autotest_common.sh@641 -- # waitforlisten 72818 00:09:59.933 16:02:29 -- common/autotest_common.sh@817 -- # '[' -z 72818 ']' 00:09:59.933 16:02:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.933 ERROR: process (pid: 72818) is no longer running 00:09:59.933 16:02:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:59.933 16:02:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:59.933 16:02:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:59.933 16:02:29 -- common/autotest_common.sh@10 -- # set +x 00:09:59.933 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (72818) - No such process 00:09:59.933 16:02:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:59.933 16:02:29 -- common/autotest_common.sh@850 -- # return 1 00:09:59.933 16:02:29 -- common/autotest_common.sh@641 -- # es=1 00:09:59.933 16:02:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:59.933 16:02:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:59.933 16:02:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:59.933 16:02:29 -- event/cpu_locks.sh@54 -- # no_locks 00:09:59.933 16:02:29 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:59.933 16:02:29 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:59.933 16:02:29 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:59.933 00:09:59.933 real 0m1.693s 00:09:59.933 user 0m1.741s 00:09:59.933 sys 0m0.539s 00:09:59.933 16:02:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:59.933 ************************************ 00:09:59.933 END TEST default_locks 00:09:59.933 ************************************ 00:09:59.933 16:02:29 -- common/autotest_common.sh@10 -- # set +x 00:09:59.933 16:02:29 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:59.933 16:02:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:59.933 16:02:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:59.933 16:02:29 -- common/autotest_common.sh@10 -- # set +x 00:10:00.211 ************************************ 00:10:00.211 START TEST default_locks_via_rpc 00:10:00.211 ************************************ 00:10:00.211 16:02:29 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:10:00.211 16:02:29 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72874 00:10:00.211 16:02:29 -- event/cpu_locks.sh@63 -- # waitforlisten 72874 00:10:00.211 16:02:29 -- common/autotest_common.sh@817 -- # '[' -z 72874 ']' 00:10:00.211 16:02:29 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:00.211 16:02:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.211 16:02:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:00.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.212 16:02:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.212 16:02:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:00.212 16:02:29 -- common/autotest_common.sh@10 -- # set +x 00:10:00.212 [2024-04-15 16:02:29.990181] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:10:00.212 [2024-04-15 16:02:29.990296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72874 ] 00:10:00.212 [2024-04-15 16:02:30.137355] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.469 [2024-04-15 16:02:30.200063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.036 16:02:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:01.036 16:02:30 -- common/autotest_common.sh@850 -- # return 0 00:10:01.036 16:02:30 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:01.036 16:02:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:01.036 16:02:30 -- common/autotest_common.sh@10 -- # set +x 00:10:01.036 16:02:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:01.036 16:02:30 -- event/cpu_locks.sh@67 -- # no_locks 00:10:01.036 16:02:30 -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:01.036 16:02:30 -- event/cpu_locks.sh@26 -- # local lock_files 00:10:01.036 16:02:30 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:01.036 16:02:30 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:01.036 16:02:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:01.036 16:02:30 -- common/autotest_common.sh@10 -- # set +x 00:10:01.036 16:02:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:01.036 16:02:30 -- event/cpu_locks.sh@71 -- # locks_exist 72874 00:10:01.036 16:02:30 -- event/cpu_locks.sh@22 -- # lslocks -p 72874 00:10:01.036 16:02:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:01.602 16:02:31 -- event/cpu_locks.sh@73 -- # killprocess 72874 00:10:01.602 16:02:31 -- common/autotest_common.sh@936 -- # '[' -z 72874 ']' 00:10:01.602 16:02:31 -- common/autotest_common.sh@940 -- # kill -0 72874 00:10:01.602 16:02:31 -- common/autotest_common.sh@941 -- # uname 00:10:01.602 16:02:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:01.602 16:02:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72874 00:10:01.602 16:02:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:01.602 killing process with pid 72874 00:10:01.602 16:02:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:01.602 16:02:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72874' 00:10:01.602 16:02:31 -- common/autotest_common.sh@955 -- # kill 72874 00:10:01.602 16:02:31 -- common/autotest_common.sh@960 -- # wait 72874 00:10:02.168 00:10:02.168 real 0m1.904s 00:10:02.168 user 0m2.067s 00:10:02.168 sys 0m0.572s 00:10:02.168 ************************************ 00:10:02.168 END TEST default_locks_via_rpc 00:10:02.168 ************************************ 00:10:02.168 16:02:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:02.168 16:02:31 -- common/autotest_common.sh@10 -- # set +x 00:10:02.168 16:02:31 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:02.168 16:02:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:02.168 16:02:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:02.168 16:02:31 -- common/autotest_common.sh@10 -- # set +x 00:10:02.168 ************************************ 00:10:02.168 START TEST non_locking_app_on_locked_coremask 00:10:02.168 ************************************ 00:10:02.168 16:02:31 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:10:02.168 16:02:31 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72924 00:10:02.168 16:02:31 -- event/cpu_locks.sh@81 -- # waitforlisten 72924 /var/tmp/spdk.sock 00:10:02.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.168 16:02:31 -- common/autotest_common.sh@817 -- # '[' -z 72924 ']' 00:10:02.168 16:02:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.168 16:02:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:02.168 16:02:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.168 16:02:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:02.168 16:02:31 -- common/autotest_common.sh@10 -- # set +x 00:10:02.168 16:02:31 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:02.168 [2024-04-15 16:02:32.013808] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:02.168 [2024-04-15 16:02:32.013930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72924 ] 00:10:02.426 [2024-04-15 16:02:32.155249] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.426 [2024-04-15 16:02:32.210784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.360 16:02:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:03.360 16:02:32 -- common/autotest_common.sh@850 -- # return 0 00:10:03.360 16:02:32 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72940 00:10:03.360 16:02:32 -- event/cpu_locks.sh@85 -- # waitforlisten 72940 /var/tmp/spdk2.sock 00:10:03.360 16:02:32 -- common/autotest_common.sh@817 -- # '[' -z 72940 ']' 00:10:03.360 16:02:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:03.360 16:02:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:03.360 16:02:32 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:03.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:03.360 16:02:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:03.360 16:02:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:03.360 16:02:32 -- common/autotest_common.sh@10 -- # set +x 00:10:03.360 [2024-04-15 16:02:33.063197] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:03.360 [2024-04-15 16:02:33.063308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72940 ] 00:10:03.360 [2024-04-15 16:02:33.213634] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:03.360 [2024-04-15 16:02:33.213705] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.360 [2024-04-15 16:02:33.315587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.296 16:02:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:04.296 16:02:34 -- common/autotest_common.sh@850 -- # return 0 00:10:04.296 16:02:34 -- event/cpu_locks.sh@87 -- # locks_exist 72924 00:10:04.296 16:02:34 -- event/cpu_locks.sh@22 -- # lslocks -p 72924 00:10:04.296 16:02:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:04.862 16:02:34 -- event/cpu_locks.sh@89 -- # killprocess 72924 00:10:04.862 16:02:34 -- common/autotest_common.sh@936 -- # '[' -z 72924 ']' 00:10:04.862 16:02:34 -- common/autotest_common.sh@940 -- # kill -0 72924 00:10:04.862 16:02:34 -- common/autotest_common.sh@941 -- # uname 00:10:04.862 16:02:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:04.862 16:02:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72924 00:10:05.121 16:02:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:05.121 killing process with pid 72924 00:10:05.121 16:02:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:05.121 16:02:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72924' 00:10:05.121 16:02:34 -- common/autotest_common.sh@955 -- # kill 72924 00:10:05.121 16:02:34 -- common/autotest_common.sh@960 -- # wait 72924 00:10:05.711 16:02:35 -- event/cpu_locks.sh@90 -- # killprocess 72940 00:10:05.711 16:02:35 -- common/autotest_common.sh@936 -- # '[' -z 72940 ']' 00:10:05.711 16:02:35 -- common/autotest_common.sh@940 -- # kill -0 72940 00:10:05.711 16:02:35 -- common/autotest_common.sh@941 -- # uname 00:10:05.711 16:02:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:05.711 16:02:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72940 00:10:05.711 16:02:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:05.711 killing process with pid 72940 00:10:05.711 16:02:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:05.711 16:02:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72940' 00:10:05.711 16:02:35 -- common/autotest_common.sh@955 -- # kill 72940 00:10:05.711 16:02:35 -- common/autotest_common.sh@960 -- # wait 72940 00:10:05.969 00:10:05.969 real 0m3.895s 00:10:05.969 user 0m4.380s 00:10:05.969 sys 0m1.102s 00:10:05.969 16:02:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:05.969 16:02:35 -- common/autotest_common.sh@10 -- # set +x 00:10:05.969 ************************************ 00:10:05.969 END TEST non_locking_app_on_locked_coremask 00:10:05.969 ************************************ 00:10:05.969 16:02:35 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:05.969 16:02:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:05.969 16:02:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:05.969 16:02:35 -- common/autotest_common.sh@10 -- # set +x 00:10:06.226 ************************************ 00:10:06.226 START TEST locking_app_on_unlocked_coremask 00:10:06.226 ************************************ 00:10:06.226 16:02:35 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:10:06.226 16:02:35 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=73011 00:10:06.226 16:02:35 -- event/cpu_locks.sh@99 -- # waitforlisten 73011 /var/tmp/spdk.sock 00:10:06.226 
16:02:35 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:06.226 16:02:35 -- common/autotest_common.sh@817 -- # '[' -z 73011 ']' 00:10:06.226 16:02:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.226 16:02:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:06.226 16:02:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.226 16:02:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:06.226 16:02:35 -- common/autotest_common.sh@10 -- # set +x 00:10:06.226 [2024-04-15 16:02:36.021596] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:06.226 [2024-04-15 16:02:36.021676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73011 ] 00:10:06.226 [2024-04-15 16:02:36.164425] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:06.226 [2024-04-15 16:02:36.164487] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.484 [2024-04-15 16:02:36.221521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.484 16:02:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:06.484 16:02:36 -- common/autotest_common.sh@850 -- # return 0 00:10:06.484 16:02:36 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=73019 00:10:06.484 16:02:36 -- event/cpu_locks.sh@103 -- # waitforlisten 73019 /var/tmp/spdk2.sock 00:10:06.484 16:02:36 -- common/autotest_common.sh@817 -- # '[' -z 73019 ']' 00:10:06.484 16:02:36 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:06.484 16:02:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:06.484 16:02:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:06.484 16:02:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:06.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:06.484 16:02:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:06.484 16:02:36 -- common/autotest_common.sh@10 -- # set +x 00:10:06.741 [2024-04-15 16:02:36.507744] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:10:06.741 [2024-04-15 16:02:36.507844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73019 ] 00:10:06.741 [2024-04-15 16:02:36.646521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.999 [2024-04-15 16:02:36.751943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.564 16:02:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:07.564 16:02:37 -- common/autotest_common.sh@850 -- # return 0 00:10:07.564 16:02:37 -- event/cpu_locks.sh@105 -- # locks_exist 73019 00:10:07.564 16:02:37 -- event/cpu_locks.sh@22 -- # lslocks -p 73019 00:10:07.564 16:02:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:08.494 16:02:38 -- event/cpu_locks.sh@107 -- # killprocess 73011 00:10:08.494 16:02:38 -- common/autotest_common.sh@936 -- # '[' -z 73011 ']' 00:10:08.494 16:02:38 -- common/autotest_common.sh@940 -- # kill -0 73011 00:10:08.494 16:02:38 -- common/autotest_common.sh@941 -- # uname 00:10:08.494 16:02:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:08.494 16:02:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73011 00:10:08.494 16:02:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:08.494 16:02:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:08.494 killing process with pid 73011 00:10:08.494 16:02:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73011' 00:10:08.494 16:02:38 -- common/autotest_common.sh@955 -- # kill 73011 00:10:08.494 16:02:38 -- common/autotest_common.sh@960 -- # wait 73011 00:10:09.427 16:02:39 -- event/cpu_locks.sh@108 -- # killprocess 73019 00:10:09.427 16:02:39 -- common/autotest_common.sh@936 -- # '[' -z 73019 ']' 00:10:09.427 16:02:39 -- common/autotest_common.sh@940 -- # kill -0 73019 00:10:09.427 16:02:39 -- common/autotest_common.sh@941 -- # uname 00:10:09.427 16:02:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:09.427 16:02:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73019 00:10:09.427 16:02:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:09.427 killing process with pid 73019 00:10:09.427 16:02:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:09.427 16:02:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73019' 00:10:09.427 16:02:39 -- common/autotest_common.sh@955 -- # kill 73019 00:10:09.427 16:02:39 -- common/autotest_common.sh@960 -- # wait 73019 00:10:09.685 00:10:09.685 real 0m3.441s 00:10:09.685 user 0m3.705s 00:10:09.685 sys 0m1.107s 00:10:09.685 16:02:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:09.685 16:02:39 -- common/autotest_common.sh@10 -- # set +x 00:10:09.685 ************************************ 00:10:09.685 END TEST locking_app_on_unlocked_coremask 00:10:09.685 ************************************ 00:10:09.685 16:02:39 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:09.685 16:02:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:09.685 16:02:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:09.685 16:02:39 -- common/autotest_common.sh@10 -- # set +x 00:10:09.685 ************************************ 00:10:09.685 START TEST locking_app_on_locked_coremask 00:10:09.685 
************************************ 00:10:09.685 16:02:39 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:10:09.685 16:02:39 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=73091 00:10:09.685 16:02:39 -- event/cpu_locks.sh@116 -- # waitforlisten 73091 /var/tmp/spdk.sock 00:10:09.685 16:02:39 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:09.685 16:02:39 -- common/autotest_common.sh@817 -- # '[' -z 73091 ']' 00:10:09.685 16:02:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.685 16:02:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:09.685 16:02:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.685 16:02:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:09.685 16:02:39 -- common/autotest_common.sh@10 -- # set +x 00:10:09.685 [2024-04-15 16:02:39.601229] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:09.685 [2024-04-15 16:02:39.601535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73091 ] 00:10:09.943 [2024-04-15 16:02:39.745426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.943 [2024-04-15 16:02:39.794363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.875 16:02:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:10.875 16:02:40 -- common/autotest_common.sh@850 -- # return 0 00:10:10.875 16:02:40 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:10.875 16:02:40 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=73107 00:10:10.875 16:02:40 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 73107 /var/tmp/spdk2.sock 00:10:10.875 16:02:40 -- common/autotest_common.sh@638 -- # local es=0 00:10:10.875 16:02:40 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 73107 /var/tmp/spdk2.sock 00:10:10.875 16:02:40 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:10:10.875 16:02:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:10.875 16:02:40 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:10:10.875 16:02:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:10.875 16:02:40 -- common/autotest_common.sh@641 -- # waitforlisten 73107 /var/tmp/spdk2.sock 00:10:10.875 16:02:40 -- common/autotest_common.sh@817 -- # '[' -z 73107 ']' 00:10:10.875 16:02:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:10.875 16:02:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:10.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:10.875 16:02:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:10.875 16:02:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:10.875 16:02:40 -- common/autotest_common.sh@10 -- # set +x 00:10:10.875 [2024-04-15 16:02:40.583856] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:10:10.875 [2024-04-15 16:02:40.583951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73107 ] 00:10:10.875 [2024-04-15 16:02:40.714266] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 73091 has claimed it. 00:10:10.875 [2024-04-15 16:02:40.714333] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:11.439 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (73107) - No such process 00:10:11.439 ERROR: process (pid: 73107) is no longer running 00:10:11.439 16:02:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:11.439 16:02:41 -- common/autotest_common.sh@850 -- # return 1 00:10:11.439 16:02:41 -- common/autotest_common.sh@641 -- # es=1 00:10:11.439 16:02:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:11.439 16:02:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:11.439 16:02:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:11.439 16:02:41 -- event/cpu_locks.sh@122 -- # locks_exist 73091 00:10:11.439 16:02:41 -- event/cpu_locks.sh@22 -- # lslocks -p 73091 00:10:11.439 16:02:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:12.005 16:02:41 -- event/cpu_locks.sh@124 -- # killprocess 73091 00:10:12.005 16:02:41 -- common/autotest_common.sh@936 -- # '[' -z 73091 ']' 00:10:12.005 16:02:41 -- common/autotest_common.sh@940 -- # kill -0 73091 00:10:12.005 16:02:41 -- common/autotest_common.sh@941 -- # uname 00:10:12.005 16:02:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:12.005 16:02:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73091 00:10:12.005 16:02:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:12.005 16:02:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:12.005 killing process with pid 73091 00:10:12.005 16:02:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73091' 00:10:12.005 16:02:41 -- common/autotest_common.sh@955 -- # kill 73091 00:10:12.005 16:02:41 -- common/autotest_common.sh@960 -- # wait 73091 00:10:12.261 00:10:12.261 real 0m2.654s 00:10:12.261 user 0m3.052s 00:10:12.261 sys 0m0.665s 00:10:12.261 16:02:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:12.261 16:02:42 -- common/autotest_common.sh@10 -- # set +x 00:10:12.261 ************************************ 00:10:12.261 END TEST locking_app_on_locked_coremask 00:10:12.261 ************************************ 00:10:12.519 16:02:42 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:12.519 16:02:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:12.519 16:02:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:12.519 16:02:42 -- common/autotest_common.sh@10 -- # set +x 00:10:12.519 ************************************ 00:10:12.519 START TEST locking_overlapped_coremask 00:10:12.519 ************************************ 00:10:12.519 16:02:42 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:10:12.519 16:02:42 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=73158 00:10:12.519 16:02:42 -- event/cpu_locks.sh@133 -- # waitforlisten 73158 /var/tmp/spdk.sock 00:10:12.519 16:02:42 -- common/autotest_common.sh@817 -- # '[' -z 73158 ']' 00:10:12.519 16:02:42 -- 
event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:12.519 16:02:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.519 16:02:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:12.519 16:02:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.519 16:02:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:12.519 16:02:42 -- common/autotest_common.sh@10 -- # set +x 00:10:12.519 [2024-04-15 16:02:42.372600] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:12.519 [2024-04-15 16:02:42.372701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73158 ] 00:10:12.776 [2024-04-15 16:02:42.528414] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.776 [2024-04-15 16:02:42.583671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.776 [2024-04-15 16:02:42.583711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.776 [2024-04-15 16:02:42.583713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.033 16:02:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:13.033 16:02:42 -- common/autotest_common.sh@850 -- # return 0 00:10:13.033 16:02:42 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=73168 00:10:13.033 16:02:42 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 73168 /var/tmp/spdk2.sock 00:10:13.033 16:02:42 -- common/autotest_common.sh@638 -- # local es=0 00:10:13.033 16:02:42 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 73168 /var/tmp/spdk2.sock 00:10:13.033 16:02:42 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:10:13.033 16:02:42 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:13.033 16:02:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:13.033 16:02:42 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:10:13.033 16:02:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:13.033 16:02:42 -- common/autotest_common.sh@641 -- # waitforlisten 73168 /var/tmp/spdk2.sock 00:10:13.033 16:02:42 -- common/autotest_common.sh@817 -- # '[' -z 73168 ']' 00:10:13.033 16:02:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:13.033 16:02:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:13.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:13.033 16:02:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:13.033 16:02:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:13.033 16:02:42 -- common/autotest_common.sh@10 -- # set +x 00:10:13.033 [2024-04-15 16:02:42.862078] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:10:13.033 [2024-04-15 16:02:42.862169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73168 ] 00:10:13.290 [2024-04-15 16:02:43.011520] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73158 has claimed it. 00:10:13.290 [2024-04-15 16:02:43.024673] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:13.891 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (73168) - No such process 00:10:13.892 ERROR: process (pid: 73168) is no longer running 00:10:13.892 16:02:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:13.892 16:02:43 -- common/autotest_common.sh@850 -- # return 1 00:10:13.892 16:02:43 -- common/autotest_common.sh@641 -- # es=1 00:10:13.892 16:02:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:13.892 16:02:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:13.892 16:02:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:13.892 16:02:43 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:13.892 16:02:43 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:13.892 16:02:43 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:13.892 16:02:43 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:13.892 16:02:43 -- event/cpu_locks.sh@141 -- # killprocess 73158 00:10:13.892 16:02:43 -- common/autotest_common.sh@936 -- # '[' -z 73158 ']' 00:10:13.892 16:02:43 -- common/autotest_common.sh@940 -- # kill -0 73158 00:10:13.892 16:02:43 -- common/autotest_common.sh@941 -- # uname 00:10:13.892 16:02:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:13.892 16:02:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73158 00:10:13.892 16:02:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:13.892 16:02:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:13.892 16:02:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73158' 00:10:13.892 killing process with pid 73158 00:10:13.892 16:02:43 -- common/autotest_common.sh@955 -- # kill 73158 00:10:13.892 16:02:43 -- common/autotest_common.sh@960 -- # wait 73158 00:10:14.149 00:10:14.149 real 0m1.623s 00:10:14.149 user 0m4.335s 00:10:14.149 sys 0m0.379s 00:10:14.149 16:02:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:14.149 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:10:14.149 ************************************ 00:10:14.149 END TEST locking_overlapped_coremask 00:10:14.149 ************************************ 00:10:14.149 16:02:43 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:14.149 16:02:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:14.149 16:02:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:14.149 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:10:14.149 ************************************ 00:10:14.149 START TEST locking_overlapped_coremask_via_rpc 00:10:14.149 ************************************ 
00:10:14.149 16:02:44 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:10:14.149 16:02:44 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=73218 00:10:14.149 16:02:44 -- event/cpu_locks.sh@149 -- # waitforlisten 73218 /var/tmp/spdk.sock 00:10:14.149 16:02:44 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:14.149 16:02:44 -- common/autotest_common.sh@817 -- # '[' -z 73218 ']' 00:10:14.149 16:02:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.149 16:02:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:14.149 16:02:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.149 16:02:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:14.149 16:02:44 -- common/autotest_common.sh@10 -- # set +x 00:10:14.406 [2024-04-15 16:02:44.143828] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:14.406 [2024-04-15 16:02:44.144156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73218 ] 00:10:14.406 [2024-04-15 16:02:44.288740] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:14.406 [2024-04-15 16:02:44.288808] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:14.406 [2024-04-15 16:02:44.345300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.406 [2024-04-15 16:02:44.345497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.406 [2024-04-15 16:02:44.345499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.340 16:02:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:15.340 16:02:45 -- common/autotest_common.sh@850 -- # return 0 00:10:15.340 16:02:45 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:15.340 16:02:45 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=73236 00:10:15.340 16:02:45 -- event/cpu_locks.sh@153 -- # waitforlisten 73236 /var/tmp/spdk2.sock 00:10:15.340 16:02:45 -- common/autotest_common.sh@817 -- # '[' -z 73236 ']' 00:10:15.340 16:02:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:15.340 16:02:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:15.340 16:02:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:15.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:15.340 16:02:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:15.340 16:02:45 -- common/autotest_common.sh@10 -- # set +x 00:10:15.340 [2024-04-15 16:02:45.213015] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:10:15.340 [2024-04-15 16:02:45.213090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73236 ] 00:10:15.599 [2024-04-15 16:02:45.357480] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:15.599 [2024-04-15 16:02:45.370591] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.599 [2024-04-15 16:02:45.461569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.599 [2024-04-15 16:02:45.474757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.599 [2024-04-15 16:02:45.474758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.534 16:02:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:16.534 16:02:46 -- common/autotest_common.sh@850 -- # return 0 00:10:16.534 16:02:46 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:16.534 16:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:16.534 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:10:16.534 16:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:16.534 16:02:46 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:16.534 16:02:46 -- common/autotest_common.sh@638 -- # local es=0 00:10:16.534 16:02:46 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:16.534 16:02:46 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:10:16.534 16:02:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:16.534 16:02:46 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:10:16.534 16:02:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:16.534 16:02:46 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:16.534 16:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:16.534 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:10:16.534 [2024-04-15 16:02:46.252720] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73218 has claimed it. 00:10:16.534 request: 00:10:16.534 { 00:10:16.534 "method": "framework_enable_cpumask_locks", 00:10:16.534 "req_id": 1 00:10:16.534 } 00:10:16.534 Got JSON-RPC error response 00:10:16.534 response: 00:10:16.534 { 00:10:16.534 "code": -32603, 00:10:16.534 "message": "Failed to claim CPU core: 2" 00:10:16.534 } 00:10:16.534 16:02:46 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:10:16.534 16:02:46 -- common/autotest_common.sh@641 -- # es=1 00:10:16.534 16:02:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:16.534 16:02:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:16.534 16:02:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:16.534 16:02:46 -- event/cpu_locks.sh@158 -- # waitforlisten 73218 /var/tmp/spdk.sock 00:10:16.534 16:02:46 -- common/autotest_common.sh@817 -- # '[' -z 73218 ']' 00:10:16.534 16:02:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.534 16:02:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:16.534 16:02:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:16.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.534 16:02:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:16.534 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:10:16.792 16:02:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:16.792 16:02:46 -- common/autotest_common.sh@850 -- # return 0 00:10:16.792 16:02:46 -- event/cpu_locks.sh@159 -- # waitforlisten 73236 /var/tmp/spdk2.sock 00:10:16.792 16:02:46 -- common/autotest_common.sh@817 -- # '[' -z 73236 ']' 00:10:16.792 16:02:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:16.792 16:02:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:16.792 16:02:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:16.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:16.792 16:02:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:16.792 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:10:17.050 16:02:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:17.050 16:02:46 -- common/autotest_common.sh@850 -- # return 0 00:10:17.050 16:02:46 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:17.050 16:02:46 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:17.050 16:02:46 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:17.050 16:02:46 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:17.050 00:10:17.050 real 0m2.743s 00:10:17.050 user 0m1.458s 00:10:17.050 sys 0m0.214s 00:10:17.050 16:02:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:17.050 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:10:17.050 ************************************ 00:10:17.050 END TEST locking_overlapped_coremask_via_rpc 00:10:17.050 ************************************ 00:10:17.050 16:02:46 -- event/cpu_locks.sh@174 -- # cleanup 00:10:17.050 16:02:46 -- event/cpu_locks.sh@15 -- # [[ -z 73218 ]] 00:10:17.050 16:02:46 -- event/cpu_locks.sh@15 -- # killprocess 73218 00:10:17.050 16:02:46 -- common/autotest_common.sh@936 -- # '[' -z 73218 ']' 00:10:17.050 16:02:46 -- common/autotest_common.sh@940 -- # kill -0 73218 00:10:17.050 16:02:46 -- common/autotest_common.sh@941 -- # uname 00:10:17.050 16:02:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:17.050 16:02:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73218 00:10:17.050 16:02:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:17.050 16:02:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:17.050 16:02:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73218' 00:10:17.050 killing process with pid 73218 00:10:17.050 16:02:46 -- common/autotest_common.sh@955 -- # kill 73218 00:10:17.050 16:02:46 -- common/autotest_common.sh@960 -- # wait 73218 00:10:17.308 16:02:47 -- event/cpu_locks.sh@16 -- # [[ -z 73236 ]] 00:10:17.308 16:02:47 -- event/cpu_locks.sh@16 -- # killprocess 73236 00:10:17.308 16:02:47 -- common/autotest_common.sh@936 -- # '[' -z 73236 ']' 00:10:17.308 16:02:47 -- common/autotest_common.sh@940 -- # kill -0 
73236 00:10:17.308 16:02:47 -- common/autotest_common.sh@941 -- # uname 00:10:17.308 16:02:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:17.308 16:02:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73236 00:10:17.308 16:02:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:17.308 16:02:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:17.308 killing process with pid 73236 00:10:17.308 16:02:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73236' 00:10:17.308 16:02:47 -- common/autotest_common.sh@955 -- # kill 73236 00:10:17.308 16:02:47 -- common/autotest_common.sh@960 -- # wait 73236 00:10:17.877 16:02:47 -- event/cpu_locks.sh@18 -- # rm -f 00:10:17.877 16:02:47 -- event/cpu_locks.sh@1 -- # cleanup 00:10:17.877 16:02:47 -- event/cpu_locks.sh@15 -- # [[ -z 73218 ]] 00:10:17.877 16:02:47 -- event/cpu_locks.sh@15 -- # killprocess 73218 00:10:17.877 16:02:47 -- common/autotest_common.sh@936 -- # '[' -z 73218 ']' 00:10:17.877 16:02:47 -- common/autotest_common.sh@940 -- # kill -0 73218 00:10:17.877 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (73218) - No such process 00:10:17.877 Process with pid 73218 is not found 00:10:17.877 16:02:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 73218 is not found' 00:10:17.877 16:02:47 -- event/cpu_locks.sh@16 -- # [[ -z 73236 ]] 00:10:17.877 16:02:47 -- event/cpu_locks.sh@16 -- # killprocess 73236 00:10:17.877 16:02:47 -- common/autotest_common.sh@936 -- # '[' -z 73236 ']' 00:10:17.877 16:02:47 -- common/autotest_common.sh@940 -- # kill -0 73236 00:10:17.877 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (73236) - No such process 00:10:17.877 Process with pid 73236 is not found 00:10:17.877 16:02:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 73236 is not found' 00:10:17.877 16:02:47 -- event/cpu_locks.sh@18 -- # rm -f 00:10:17.877 00:10:17.877 real 0m19.653s 00:10:17.877 user 0m33.694s 00:10:17.877 sys 0m5.644s 00:10:17.877 16:02:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:17.877 16:02:47 -- common/autotest_common.sh@10 -- # set +x 00:10:17.877 ************************************ 00:10:17.877 END TEST cpu_locks 00:10:17.877 ************************************ 00:10:17.877 00:10:17.877 real 0m47.189s 00:10:17.877 user 1m29.664s 00:10:17.877 sys 0m9.839s 00:10:17.877 16:02:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:17.877 ************************************ 00:10:17.877 END TEST event 00:10:17.877 16:02:47 -- common/autotest_common.sh@10 -- # set +x 00:10:17.877 ************************************ 00:10:17.877 16:02:47 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:17.877 16:02:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:17.877 16:02:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.877 16:02:47 -- common/autotest_common.sh@10 -- # set +x 00:10:17.877 ************************************ 00:10:17.877 START TEST thread 00:10:17.877 ************************************ 00:10:17.877 16:02:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:18.137 * Looking for test storage... 
00:10:18.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:18.137 16:02:47 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:18.137 16:02:47 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:18.137 16:02:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.137 16:02:47 -- common/autotest_common.sh@10 -- # set +x 00:10:18.137 ************************************ 00:10:18.137 START TEST thread_poller_perf 00:10:18.137 ************************************ 00:10:18.137 16:02:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:18.137 [2024-04-15 16:02:47.966663] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:18.137 [2024-04-15 16:02:47.966743] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73367 ] 00:10:18.395 [2024-04-15 16:02:48.110473] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.395 [2024-04-15 16:02:48.164400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.395 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:18.395 [2024-04-15 16:02:48.164485] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:19.371 ====================================== 00:10:19.371 busy:2107821846 (cyc) 00:10:19.371 total_run_count: 349000 00:10:19.371 tsc_hz: 2100000000 (cyc) 00:10:19.371 ====================================== 00:10:19.371 poller_cost: 6039 (cyc), 2875 (nsec) 00:10:19.371 00:10:19.371 real 0m1.288s 00:10:19.371 user 0m1.128s 00:10:19.371 sys 0m0.052s 00:10:19.371 16:02:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:19.371 ************************************ 00:10:19.371 END TEST thread_poller_perf 00:10:19.371 ************************************ 00:10:19.371 16:02:49 -- common/autotest_common.sh@10 -- # set +x 00:10:19.371 16:02:49 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:19.371 16:02:49 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:19.371 16:02:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:19.371 16:02:49 -- common/autotest_common.sh@10 -- # set +x 00:10:19.630 ************************************ 00:10:19.630 START TEST thread_poller_perf 00:10:19.630 ************************************ 00:10:19.630 16:02:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:19.630 [2024-04-15 16:02:49.402660] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:10:19.630 [2024-04-15 16:02:49.402759] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73407 ] 00:10:19.630 [2024-04-15 16:02:49.546276] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.630 [2024-04-15 16:02:49.593381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.630 [2024-04-15 16:02:49.593452] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:19.630 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:21.010 ====================================== 00:10:21.010 busy:2102075590 (cyc) 00:10:21.010 total_run_count: 4832000 00:10:21.010 tsc_hz: 2100000000 (cyc) 00:10:21.010 ====================================== 00:10:21.010 poller_cost: 435 (cyc), 207 (nsec) 00:10:21.010 00:10:21.010 real 0m1.282s 00:10:21.010 user 0m1.121s 00:10:21.010 sys 0m0.051s 00:10:21.010 16:02:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:21.010 16:02:50 -- common/autotest_common.sh@10 -- # set +x 00:10:21.010 ************************************ 00:10:21.010 END TEST thread_poller_perf 00:10:21.010 ************************************ 00:10:21.010 16:02:50 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:21.010 00:10:21.010 real 0m2.953s 00:10:21.010 user 0m2.376s 00:10:21.010 sys 0m0.331s 00:10:21.010 16:02:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:21.010 16:02:50 -- common/autotest_common.sh@10 -- # set +x 00:10:21.010 ************************************ 00:10:21.010 END TEST thread 00:10:21.010 ************************************ 00:10:21.010 16:02:50 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:21.010 16:02:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:21.010 16:02:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:21.010 16:02:50 -- common/autotest_common.sh@10 -- # set +x 00:10:21.010 ************************************ 00:10:21.010 START TEST accel 00:10:21.010 ************************************ 00:10:21.010 16:02:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:21.010 * Looking for test storage... 00:10:21.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:21.010 16:02:50 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:10:21.010 16:02:50 -- accel/accel.sh@82 -- # get_expected_opcs 00:10:21.010 16:02:50 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:21.010 16:02:50 -- accel/accel.sh@62 -- # spdk_tgt_pid=73486 00:10:21.010 16:02:50 -- accel/accel.sh@63 -- # waitforlisten 73486 00:10:21.010 16:02:50 -- common/autotest_common.sh@817 -- # '[' -z 73486 ']' 00:10:21.010 16:02:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.010 16:02:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:21.010 16:02:50 -- accel/accel.sh@61 -- # build_accel_config 00:10:21.010 16:02:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:21.010 16:02:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:21.010 16:02:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:21.010 16:02:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.010 16:02:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.010 16:02:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:21.010 16:02:50 -- accel/accel.sh@40 -- # local IFS=, 00:10:21.010 16:02:50 -- accel/accel.sh@41 -- # jq -r . 00:10:21.010 16:02:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:21.010 16:02:50 -- common/autotest_common.sh@10 -- # set +x 00:10:21.010 16:02:50 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:21.268 [2024-04-15 16:02:51.028134] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:21.268 [2024-04-15 16:02:51.028234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73486 ] 00:10:21.268 [2024-04-15 16:02:51.171417] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.268 [2024-04-15 16:02:51.225955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.527 16:02:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:21.527 16:02:51 -- common/autotest_common.sh@850 -- # return 0 00:10:21.527 16:02:51 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:10:21.527 16:02:51 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:10:21.527 16:02:51 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:10:21.527 16:02:51 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:10:21.527 16:02:51 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:21.527 16:02:51 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:10:21.527 16:02:51 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:10:21.527 16:02:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:21.527 16:02:51 -- common/autotest_common.sh@10 -- # set +x 00:10:21.527 16:02:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:21.786 16:02:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:21.786 16:02:51 -- accel/accel.sh@72 -- # IFS== 00:10:21.786 16:02:51 -- accel/accel.sh@72 -- # read -r opc module 00:10:21.787 16:02:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:21.787 16:02:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # IFS== 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # read -r opc module 00:10:21.787 16:02:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:21.787 16:02:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # IFS== 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # read -r opc module 00:10:21.787 16:02:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:21.787 16:02:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # IFS== 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # read -r opc module 00:10:21.787 16:02:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:21.787 16:02:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # IFS== 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # read -r opc module 00:10:21.787 16:02:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:21.787 16:02:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # IFS== 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # read -r opc module 00:10:21.787 16:02:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:21.787 16:02:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # IFS== 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # read -r opc module 00:10:21.787 16:02:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:21.787 16:02:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # IFS== 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # read -r opc module 00:10:21.787 16:02:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:21.787 16:02:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # IFS== 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # read -r opc module 00:10:21.787 16:02:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:21.787 16:02:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # IFS== 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # read -r opc module 00:10:21.787 16:02:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:21.787 16:02:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # IFS== 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # read -r opc module 00:10:21.787 16:02:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:21.787 16:02:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # IFS== 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # read -r opc module 00:10:21.787 
16:02:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:21.787 16:02:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # IFS== 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # read -r opc module 00:10:21.787 16:02:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:21.787 16:02:51 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # IFS== 00:10:21.787 16:02:51 -- accel/accel.sh@72 -- # read -r opc module 00:10:21.787 16:02:51 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:21.787 16:02:51 -- accel/accel.sh@75 -- # killprocess 73486 00:10:21.787 16:02:51 -- common/autotest_common.sh@936 -- # '[' -z 73486 ']' 00:10:21.787 16:02:51 -- common/autotest_common.sh@940 -- # kill -0 73486 00:10:21.787 16:02:51 -- common/autotest_common.sh@941 -- # uname 00:10:21.787 16:02:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:21.787 16:02:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73486 00:10:21.787 16:02:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:21.787 16:02:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:21.787 16:02:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73486' 00:10:21.787 killing process with pid 73486 00:10:21.787 16:02:51 -- common/autotest_common.sh@955 -- # kill 73486 00:10:21.787 16:02:51 -- common/autotest_common.sh@960 -- # wait 73486 00:10:22.047 16:02:51 -- accel/accel.sh@76 -- # trap - ERR 00:10:22.047 16:02:51 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:10:22.047 16:02:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:22.047 16:02:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.047 16:02:51 -- common/autotest_common.sh@10 -- # set +x 00:10:22.047 16:02:51 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:10:22.047 16:02:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:22.047 16:02:51 -- accel/accel.sh@12 -- # build_accel_config 00:10:22.047 16:02:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:22.047 16:02:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:22.047 16:02:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:22.047 16:02:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:22.047 16:02:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:22.047 16:02:51 -- accel/accel.sh@40 -- # local IFS=, 00:10:22.047 16:02:51 -- accel/accel.sh@41 -- # jq -r . 
00:10:22.047 16:02:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:22.047 16:02:51 -- common/autotest_common.sh@10 -- # set +x 00:10:22.306 16:02:52 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:22.306 16:02:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:22.306 16:02:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.306 16:02:52 -- common/autotest_common.sh@10 -- # set +x 00:10:22.306 ************************************ 00:10:22.306 START TEST accel_missing_filename 00:10:22.306 ************************************ 00:10:22.306 16:02:52 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:10:22.306 16:02:52 -- common/autotest_common.sh@638 -- # local es=0 00:10:22.306 16:02:52 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:22.306 16:02:52 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:10:22.306 16:02:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:22.306 16:02:52 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:10:22.306 16:02:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:22.306 16:02:52 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:10:22.306 16:02:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:22.306 16:02:52 -- accel/accel.sh@12 -- # build_accel_config 00:10:22.306 16:02:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:22.306 16:02:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:22.306 16:02:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:22.306 16:02:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:22.306 16:02:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:22.306 16:02:52 -- accel/accel.sh@40 -- # local IFS=, 00:10:22.306 16:02:52 -- accel/accel.sh@41 -- # jq -r . 00:10:22.306 [2024-04-15 16:02:52.145838] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:22.306 [2024-04-15 16:02:52.146105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73545 ] 00:10:22.566 [2024-04-15 16:02:52.293237] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.566 [2024-04-15 16:02:52.345919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.566 [2024-04-15 16:02:52.346920] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:22.566 [2024-04-15 16:02:52.395695] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:22.566 [2024-04-15 16:02:52.461624] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:10:22.824 A filename is required. 
00:10:22.824 16:02:52 -- common/autotest_common.sh@641 -- # es=234 00:10:22.824 16:02:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:22.824 16:02:52 -- common/autotest_common.sh@650 -- # es=106 00:10:22.824 16:02:52 -- common/autotest_common.sh@651 -- # case "$es" in 00:10:22.824 16:02:52 -- common/autotest_common.sh@658 -- # es=1 00:10:22.824 16:02:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:22.824 00:10:22.824 real 0m0.423s 00:10:22.824 user 0m0.239s 00:10:22.824 sys 0m0.115s 00:10:22.824 16:02:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:22.824 16:02:52 -- common/autotest_common.sh@10 -- # set +x 00:10:22.824 ************************************ 00:10:22.824 END TEST accel_missing_filename 00:10:22.824 ************************************ 00:10:22.824 16:02:52 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:22.824 16:02:52 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:10:22.824 16:02:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.824 16:02:52 -- common/autotest_common.sh@10 -- # set +x 00:10:22.824 ************************************ 00:10:22.824 START TEST accel_compress_verify 00:10:22.824 ************************************ 00:10:22.824 16:02:52 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:22.824 16:02:52 -- common/autotest_common.sh@638 -- # local es=0 00:10:22.824 16:02:52 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:22.824 16:02:52 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:10:22.824 16:02:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:22.824 16:02:52 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:10:22.824 16:02:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:22.824 16:02:52 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:22.824 16:02:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:22.824 16:02:52 -- accel/accel.sh@12 -- # build_accel_config 00:10:22.824 16:02:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:22.824 16:02:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:22.824 16:02:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:22.824 16:02:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:22.824 16:02:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:22.824 16:02:52 -- accel/accel.sh@40 -- # local IFS=, 00:10:22.824 16:02:52 -- accel/accel.sh@41 -- # jq -r . 00:10:22.824 [2024-04-15 16:02:52.699981] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:10:22.824 [2024-04-15 16:02:52.700237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73568 ] 00:10:23.083 [2024-04-15 16:02:52.845281] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.083 [2024-04-15 16:02:52.899876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.083 [2024-04-15 16:02:52.900981] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:23.083 [2024-04-15 16:02:52.949629] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:23.083 [2024-04-15 16:02:53.014756] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:10:23.341 00:10:23.341 Compression does not support the verify option, aborting. 00:10:23.341 16:02:53 -- common/autotest_common.sh@641 -- # es=161 00:10:23.341 16:02:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:23.341 16:02:53 -- common/autotest_common.sh@650 -- # es=33 00:10:23.341 ************************************ 00:10:23.341 END TEST accel_compress_verify 00:10:23.341 ************************************ 00:10:23.341 16:02:53 -- common/autotest_common.sh@651 -- # case "$es" in 00:10:23.341 16:02:53 -- common/autotest_common.sh@658 -- # es=1 00:10:23.341 16:02:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:23.341 00:10:23.341 real 0m0.423s 00:10:23.341 user 0m0.244s 00:10:23.341 sys 0m0.109s 00:10:23.341 16:02:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:23.341 16:02:53 -- common/autotest_common.sh@10 -- # set +x 00:10:23.341 16:02:53 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:23.341 16:02:53 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:23.341 16:02:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.341 16:02:53 -- common/autotest_common.sh@10 -- # set +x 00:10:23.341 ************************************ 00:10:23.341 START TEST accel_wrong_workload 00:10:23.341 ************************************ 00:10:23.341 16:02:53 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:10:23.341 16:02:53 -- common/autotest_common.sh@638 -- # local es=0 00:10:23.341 16:02:53 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:23.341 16:02:53 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:10:23.341 16:02:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:23.341 16:02:53 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:10:23.341 16:02:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:23.341 16:02:53 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:10:23.341 16:02:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:23.341 16:02:53 -- accel/accel.sh@12 -- # build_accel_config 00:10:23.341 16:02:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:23.341 16:02:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:23.341 16:02:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.341 16:02:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.341 16:02:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:23.341 16:02:53 -- accel/accel.sh@40 -- # local IFS=, 00:10:23.341 16:02:53 -- accel/accel.sh@41 -- # jq -r . 
00:10:23.341 Unsupported workload type: foobar 00:10:23.341 [2024-04-15 16:02:53.257178] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:23.341 accel_perf options: 00:10:23.341 [-h help message] 00:10:23.341 [-q queue depth per core] 00:10:23.341 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:23.341 [-T number of threads per core 00:10:23.341 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:23.341 [-t time in seconds] 00:10:23.341 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:23.341 [ dif_verify, , dif_generate, dif_generate_copy 00:10:23.341 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:23.341 [-l for compress/decompress workloads, name of uncompressed input file 00:10:23.341 [-S for crc32c workload, use this seed value (default 0) 00:10:23.341 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:23.341 [-f for fill workload, use this BYTE value (default 255) 00:10:23.341 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:23.341 [-y verify result if this switch is on] 00:10:23.341 [-a tasks to allocate per core (default: same value as -q)] 00:10:23.341 Can be used to spread operations across a wider range of memory. 00:10:23.341 16:02:53 -- common/autotest_common.sh@641 -- # es=1 00:10:23.341 16:02:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:23.341 16:02:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:23.341 16:02:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:23.341 00:10:23.341 real 0m0.046s 00:10:23.341 user 0m0.058s 00:10:23.341 sys 0m0.021s 00:10:23.341 16:02:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:23.341 16:02:53 -- common/autotest_common.sh@10 -- # set +x 00:10:23.341 ************************************ 00:10:23.341 END TEST accel_wrong_workload 00:10:23.341 ************************************ 00:10:23.600 16:02:53 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:23.600 16:02:53 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:10:23.600 16:02:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.600 16:02:53 -- common/autotest_common.sh@10 -- # set +x 00:10:23.600 ************************************ 00:10:23.600 START TEST accel_negative_buffers 00:10:23.600 ************************************ 00:10:23.600 16:02:53 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:23.600 16:02:53 -- common/autotest_common.sh@638 -- # local es=0 00:10:23.600 16:02:53 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:23.600 16:02:53 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:10:23.600 16:02:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:23.600 16:02:53 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:10:23.600 16:02:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:23.600 16:02:53 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:10:23.600 16:02:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:23.600 16:02:53 -- accel/accel.sh@12 -- # 
build_accel_config 00:10:23.600 16:02:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:23.600 16:02:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:23.600 16:02:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.600 16:02:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.600 16:02:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:23.600 16:02:53 -- accel/accel.sh@40 -- # local IFS=, 00:10:23.600 16:02:53 -- accel/accel.sh@41 -- # jq -r . 00:10:23.600 -x option must be non-negative. 00:10:23.600 [2024-04-15 16:02:53.432461] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:23.600 accel_perf options: 00:10:23.600 [-h help message] 00:10:23.600 [-q queue depth per core] 00:10:23.600 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:23.600 [-T number of threads per core 00:10:23.600 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:23.600 [-t time in seconds] 00:10:23.600 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:23.600 [ dif_verify, , dif_generate, dif_generate_copy 00:10:23.600 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:23.600 [-l for compress/decompress workloads, name of uncompressed input file 00:10:23.600 [-S for crc32c workload, use this seed value (default 0) 00:10:23.600 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:23.600 [-f for fill workload, use this BYTE value (default 255) 00:10:23.600 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:23.600 [-y verify result if this switch is on] 00:10:23.600 [-a tasks to allocate per core (default: same value as -q)] 00:10:23.600 Can be used to spread operations across a wider range of memory. 
00:10:23.600 16:02:53 -- common/autotest_common.sh@641 -- # es=1 00:10:23.600 16:02:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:23.600 16:02:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:23.600 16:02:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:23.600 00:10:23.600 real 0m0.034s 00:10:23.600 user 0m0.016s 00:10:23.600 sys 0m0.012s 00:10:23.600 16:02:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:23.600 16:02:53 -- common/autotest_common.sh@10 -- # set +x 00:10:23.600 ************************************ 00:10:23.600 END TEST accel_negative_buffers 00:10:23.600 ************************************ 00:10:23.600 16:02:53 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:23.600 16:02:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:23.600 16:02:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.600 16:02:53 -- common/autotest_common.sh@10 -- # set +x 00:10:23.893 ************************************ 00:10:23.893 START TEST accel_crc32c 00:10:23.893 ************************************ 00:10:23.893 16:02:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:23.893 16:02:53 -- accel/accel.sh@16 -- # local accel_opc 00:10:23.893 16:02:53 -- accel/accel.sh@17 -- # local accel_module 00:10:23.893 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:23.893 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:23.893 16:02:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:23.893 16:02:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:23.893 16:02:53 -- accel/accel.sh@12 -- # build_accel_config 00:10:23.893 16:02:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:23.893 16:02:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:23.893 16:02:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.893 16:02:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.893 16:02:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:23.893 16:02:53 -- accel/accel.sh@40 -- # local IFS=, 00:10:23.893 16:02:53 -- accel/accel.sh@41 -- # jq -r . 00:10:23.893 [2024-04-15 16:02:53.600517] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:10:23.893 [2024-04-15 16:02:53.600804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73646 ] 00:10:23.893 [2024-04-15 16:02:53.748358] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.893 [2024-04-15 16:02:53.801319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.893 [2024-04-15 16:02:53.802296] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:23.893 16:02:53 -- accel/accel.sh@20 -- # val= 00:10:23.893 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.893 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:23.893 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:23.893 16:02:53 -- accel/accel.sh@20 -- # val= 00:10:23.893 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.893 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:23.893 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:23.893 16:02:53 -- accel/accel.sh@20 -- # val=0x1 00:10:23.893 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.893 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.151 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:24.151 16:02:53 -- accel/accel.sh@20 -- # val= 00:10:24.151 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.151 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.151 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:24.151 16:02:53 -- accel/accel.sh@20 -- # val= 00:10:24.151 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.151 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:24.152 16:02:53 -- accel/accel.sh@20 -- # val=crc32c 00:10:24.152 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.152 16:02:53 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:24.152 16:02:53 -- accel/accel.sh@20 -- # val=32 00:10:24.152 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:24.152 16:02:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:24.152 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:24.152 16:02:53 -- accel/accel.sh@20 -- # val= 00:10:24.152 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:24.152 16:02:53 -- accel/accel.sh@20 -- # val=software 00:10:24.152 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.152 16:02:53 -- accel/accel.sh@22 -- # accel_module=software 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:24.152 16:02:53 -- accel/accel.sh@20 -- # val=32 00:10:24.152 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:24.152 16:02:53 -- accel/accel.sh@20 -- # val=32 00:10:24.152 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.152 16:02:53 -- 
accel/accel.sh@19 -- # read -r var val 00:10:24.152 16:02:53 -- accel/accel.sh@20 -- # val=1 00:10:24.152 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:24.152 16:02:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:24.152 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:24.152 16:02:53 -- accel/accel.sh@20 -- # val=Yes 00:10:24.152 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:24.152 16:02:53 -- accel/accel.sh@20 -- # val= 00:10:24.152 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:24.152 16:02:53 -- accel/accel.sh@20 -- # val= 00:10:24.152 16:02:53 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # IFS=: 00:10:24.152 16:02:53 -- accel/accel.sh@19 -- # read -r var val 00:10:25.085 16:02:54 -- accel/accel.sh@20 -- # val= 00:10:25.085 16:02:54 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.085 16:02:54 -- accel/accel.sh@19 -- # IFS=: 00:10:25.085 16:02:54 -- accel/accel.sh@19 -- # read -r var val 00:10:25.085 16:02:54 -- accel/accel.sh@20 -- # val= 00:10:25.085 16:02:54 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.085 16:02:54 -- accel/accel.sh@19 -- # IFS=: 00:10:25.085 16:02:54 -- accel/accel.sh@19 -- # read -r var val 00:10:25.085 16:02:54 -- accel/accel.sh@20 -- # val= 00:10:25.085 16:02:54 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.085 16:02:54 -- accel/accel.sh@19 -- # IFS=: 00:10:25.085 16:02:54 -- accel/accel.sh@19 -- # read -r var val 00:10:25.086 16:02:54 -- accel/accel.sh@20 -- # val= 00:10:25.086 16:02:54 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.086 16:02:54 -- accel/accel.sh@19 -- # IFS=: 00:10:25.086 16:02:54 -- accel/accel.sh@19 -- # read -r var val 00:10:25.086 16:02:54 -- accel/accel.sh@20 -- # val= 00:10:25.086 16:02:54 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.086 16:02:54 -- accel/accel.sh@19 -- # IFS=: 00:10:25.086 16:02:54 -- accel/accel.sh@19 -- # read -r var val 00:10:25.086 16:02:54 -- accel/accel.sh@20 -- # val= 00:10:25.086 16:02:54 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.086 16:02:54 -- accel/accel.sh@19 -- # IFS=: 00:10:25.086 16:02:54 -- accel/accel.sh@19 -- # read -r var val 00:10:25.086 16:02:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:25.086 16:02:54 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:25.086 16:02:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:25.086 00:10:25.086 real 0m1.421s 00:10:25.086 user 0m1.208s 00:10:25.086 sys 0m0.111s 00:10:25.086 16:02:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:25.086 16:02:54 -- common/autotest_common.sh@10 -- # set +x 00:10:25.086 ************************************ 00:10:25.086 END TEST accel_crc32c 00:10:25.086 ************************************ 00:10:25.086 16:02:55 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:25.086 16:02:55 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:25.086 16:02:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:25.086 16:02:55 -- common/autotest_common.sh@10 -- # set +x 00:10:25.344 
************************************ 00:10:25.344 START TEST accel_crc32c_C2 00:10:25.344 ************************************ 00:10:25.344 16:02:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:25.344 16:02:55 -- accel/accel.sh@16 -- # local accel_opc 00:10:25.344 16:02:55 -- accel/accel.sh@17 -- # local accel_module 00:10:25.344 16:02:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:25.344 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.344 16:02:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:25.344 16:02:55 -- accel/accel.sh@12 -- # build_accel_config 00:10:25.344 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.344 16:02:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:25.344 16:02:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:25.344 16:02:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.344 16:02:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.344 16:02:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:25.344 16:02:55 -- accel/accel.sh@40 -- # local IFS=, 00:10:25.344 16:02:55 -- accel/accel.sh@41 -- # jq -r . 00:10:25.344 [2024-04-15 16:02:55.143696] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:25.344 [2024-04-15 16:02:55.143922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73683 ] 00:10:25.344 [2024-04-15 16:02:55.285960] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.604 [2024-04-15 16:02:55.333732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.604 [2024-04-15 16:02:55.334625] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val= 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val= 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val=0x1 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val= 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val= 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val=crc32c 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val=0 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- 
accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val= 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val=software 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@22 -- # accel_module=software 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val=32 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val=32 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val=1 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val=Yes 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val= 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:25.604 16:02:55 -- accel/accel.sh@20 -- # val= 00:10:25.604 16:02:55 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # IFS=: 00:10:25.604 16:02:55 -- accel/accel.sh@19 -- # read -r var val 00:10:26.983 16:02:56 -- accel/accel.sh@20 -- # val= 00:10:26.983 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.983 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.983 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.983 16:02:56 -- accel/accel.sh@20 -- # val= 00:10:26.983 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.983 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.983 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.983 16:02:56 -- accel/accel.sh@20 -- # val= 00:10:26.983 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.983 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.983 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.983 16:02:56 -- accel/accel.sh@20 -- # val= 00:10:26.983 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.983 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.983 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.983 16:02:56 -- accel/accel.sh@20 -- # val= 00:10:26.983 16:02:56 
-- accel/accel.sh@21 -- # case "$var" in 00:10:26.983 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.983 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.983 16:02:56 -- accel/accel.sh@20 -- # val= 00:10:26.983 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.983 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.983 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.983 16:02:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:26.983 16:02:56 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:26.983 16:02:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:26.983 00:10:26.983 real 0m1.398s 00:10:26.983 user 0m1.199s 00:10:26.983 sys 0m0.102s 00:10:26.983 16:02:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:26.983 16:02:56 -- common/autotest_common.sh@10 -- # set +x 00:10:26.983 ************************************ 00:10:26.983 END TEST accel_crc32c_C2 00:10:26.983 ************************************ 00:10:26.983 16:02:56 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:26.983 16:02:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:26.983 16:02:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:26.983 16:02:56 -- common/autotest_common.sh@10 -- # set +x 00:10:26.983 ************************************ 00:10:26.983 START TEST accel_copy 00:10:26.983 ************************************ 00:10:26.983 16:02:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:10:26.983 16:02:56 -- accel/accel.sh@16 -- # local accel_opc 00:10:26.983 16:02:56 -- accel/accel.sh@17 -- # local accel_module 00:10:26.983 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.983 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.983 16:02:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:26.983 16:02:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:26.983 16:02:56 -- accel/accel.sh@12 -- # build_accel_config 00:10:26.983 16:02:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:26.983 16:02:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:26.983 16:02:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.983 16:02:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.983 16:02:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:26.983 16:02:56 -- accel/accel.sh@40 -- # local IFS=, 00:10:26.984 16:02:56 -- accel/accel.sh@41 -- # jq -r . 00:10:26.984 [2024-04-15 16:02:56.679567] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:10:26.984 [2024-04-15 16:02:56.679807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73723 ] 00:10:26.984 [2024-04-15 16:02:56.825300] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.984 [2024-04-15 16:02:56.876961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.984 [2024-04-15 16:02:56.877849] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val= 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val= 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val=0x1 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val= 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val= 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val=copy 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@23 -- # accel_opc=copy 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val= 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val=software 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@22 -- # accel_module=software 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val=32 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val=32 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val=1 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- 
accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val=Yes 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val= 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:26.984 16:02:56 -- accel/accel.sh@20 -- # val= 00:10:26.984 16:02:56 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # IFS=: 00:10:26.984 16:02:56 -- accel/accel.sh@19 -- # read -r var val 00:10:28.361 16:02:58 -- accel/accel.sh@20 -- # val= 00:10:28.361 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.361 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.361 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.361 16:02:58 -- accel/accel.sh@20 -- # val= 00:10:28.361 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.361 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.361 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.362 16:02:58 -- accel/accel.sh@20 -- # val= 00:10:28.362 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.362 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.362 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.362 16:02:58 -- accel/accel.sh@20 -- # val= 00:10:28.362 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.362 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.362 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.362 16:02:58 -- accel/accel.sh@20 -- # val= 00:10:28.362 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.362 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.362 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.362 16:02:58 -- accel/accel.sh@20 -- # val= 00:10:28.362 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.362 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.362 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.362 16:02:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:28.362 ************************************ 00:10:28.362 END TEST accel_copy 00:10:28.362 ************************************ 00:10:28.362 16:02:58 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:10:28.362 16:02:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:28.362 00:10:28.362 real 0m1.399s 00:10:28.362 user 0m1.187s 00:10:28.362 sys 0m0.113s 00:10:28.362 16:02:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:28.362 16:02:58 -- common/autotest_common.sh@10 -- # set +x 00:10:28.362 16:02:58 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:28.362 16:02:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:28.362 16:02:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:28.362 16:02:58 -- common/autotest_common.sh@10 -- # set +x 00:10:28.362 ************************************ 00:10:28.362 START TEST accel_fill 00:10:28.362 ************************************ 00:10:28.362 16:02:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 
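The copy pass above completes in roughly 1.4 s of wall time (real 0m1.399s) before the harness queues the fill workload. As a rough standalone way to repeat that copy run outside the harness — a sketch that reuses the flags echoed in the trace and omits the JSON accel config the wrapper feeds in on -c /dev/fd/62 — one could call the example binary directly:

  #!/usr/bin/env bash
  # Standalone re-run of the copy workload traced above (sketch: flags copied from
  # the accel_perf command line in the log; the harness's -c /dev/fd/62 config is omitted).
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # build tree path as seen in the trace
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy -y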
00:10:28.362 16:02:58 -- accel/accel.sh@16 -- # local accel_opc 00:10:28.362 16:02:58 -- accel/accel.sh@17 -- # local accel_module 00:10:28.362 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.362 16:02:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:28.362 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.362 16:02:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:28.362 16:02:58 -- accel/accel.sh@12 -- # build_accel_config 00:10:28.362 16:02:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:28.362 16:02:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:28.362 16:02:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:28.362 16:02:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:28.362 16:02:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:28.362 16:02:58 -- accel/accel.sh@40 -- # local IFS=, 00:10:28.362 16:02:58 -- accel/accel.sh@41 -- # jq -r . 00:10:28.362 [2024-04-15 16:02:58.198138] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:28.362 [2024-04-15 16:02:58.198438] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73756 ] 00:10:28.621 [2024-04-15 16:02:58.341035] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.621 [2024-04-15 16:02:58.386485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.621 [2024-04-15 16:02:58.387316] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val= 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val= 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val=0x1 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val= 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val= 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val=fill 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@23 -- # accel_opc=fill 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val=0x80 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case 
"$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val= 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val=software 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@22 -- # accel_module=software 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val=64 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val=64 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val=1 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val=Yes 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val= 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:28.621 16:02:58 -- accel/accel.sh@20 -- # val= 00:10:28.621 16:02:58 -- accel/accel.sh@21 -- # case "$var" in 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # IFS=: 00:10:28.621 16:02:58 -- accel/accel.sh@19 -- # read -r var val 00:10:29.620 16:02:59 -- accel/accel.sh@20 -- # val= 00:10:29.620 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.620 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:29.620 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:29.620 16:02:59 -- accel/accel.sh@20 -- # val= 00:10:29.620 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.620 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:29.620 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:29.620 16:02:59 -- accel/accel.sh@20 -- # val= 00:10:29.620 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.620 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:29.620 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:29.620 16:02:59 -- accel/accel.sh@20 -- # val= 00:10:29.620 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.620 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:29.620 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:29.620 16:02:59 -- accel/accel.sh@20 -- # val= 00:10:29.620 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.620 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:29.620 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:29.620 16:02:59 -- accel/accel.sh@20 -- # val= 
00:10:29.620 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:29.620 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:29.620 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:29.620 ************************************ 00:10:29.620 END TEST accel_fill 00:10:29.620 ************************************ 00:10:29.620 16:02:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:29.620 16:02:59 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:10:29.620 16:02:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:29.620 00:10:29.620 real 0m1.387s 00:10:29.620 user 0m1.180s 00:10:29.620 sys 0m0.108s 00:10:29.620 16:02:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:29.620 16:02:59 -- common/autotest_common.sh@10 -- # set +x 00:10:29.878 16:02:59 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:29.878 16:02:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:29.878 16:02:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:29.878 16:02:59 -- common/autotest_common.sh@10 -- # set +x 00:10:29.878 ************************************ 00:10:29.878 START TEST accel_copy_crc32c 00:10:29.878 ************************************ 00:10:29.878 16:02:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:10:29.878 16:02:59 -- accel/accel.sh@16 -- # local accel_opc 00:10:29.878 16:02:59 -- accel/accel.sh@17 -- # local accel_module 00:10:29.878 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:29.878 16:02:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:29.879 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:29.879 16:02:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:29.879 16:02:59 -- accel/accel.sh@12 -- # build_accel_config 00:10:29.879 16:02:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:29.879 16:02:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:29.879 16:02:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.879 16:02:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.879 16:02:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:29.879 16:02:59 -- accel/accel.sh@40 -- # local IFS=, 00:10:29.879 16:02:59 -- accel/accel.sh@41 -- # jq -r . 00:10:29.879 [2024-04-15 16:02:59.719563] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
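Every block in this stretch follows the same shape: a START banner, the timed accel_test invocation, the shell's real/user/sys summary, and an END banner. A stripped-down stand-in for that wrapper — not SPDK's actual run_test from autotest_common.sh, just an illustration of the pattern visible in the log — could look like:

  # Simplified stand-in for the run_test pattern seen in this log (illustrative only).
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                      # produces the real/user/sys lines captured above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  # Example mirroring the fill invocation traced above:
  # run_test_sketch accel_fill accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y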
00:10:29.879 [2024-04-15 16:02:59.719819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73800 ] 00:10:30.137 [2024-04-15 16:02:59.856997] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.137 [2024-04-15 16:02:59.902508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.137 [2024-04-15 16:02:59.903404] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:30.137 16:02:59 -- accel/accel.sh@20 -- # val= 00:10:30.137 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.137 16:02:59 -- accel/accel.sh@20 -- # val= 00:10:30.137 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.137 16:02:59 -- accel/accel.sh@20 -- # val=0x1 00:10:30.137 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.137 16:02:59 -- accel/accel.sh@20 -- # val= 00:10:30.137 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.137 16:02:59 -- accel/accel.sh@20 -- # val= 00:10:30.137 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.137 16:02:59 -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:30.137 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.137 16:02:59 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.137 16:02:59 -- accel/accel.sh@20 -- # val=0 00:10:30.137 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.137 16:02:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:30.137 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.137 16:02:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:30.137 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.137 16:02:59 -- accel/accel.sh@20 -- # val= 00:10:30.137 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.137 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.137 16:02:59 -- accel/accel.sh@20 -- # val=software 00:10:30.138 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.138 16:02:59 -- accel/accel.sh@22 -- # accel_module=software 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.138 16:02:59 -- accel/accel.sh@20 -- # val=32 00:10:30.138 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # IFS=: 
00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.138 16:02:59 -- accel/accel.sh@20 -- # val=32 00:10:30.138 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.138 16:02:59 -- accel/accel.sh@20 -- # val=1 00:10:30.138 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.138 16:02:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:30.138 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.138 16:02:59 -- accel/accel.sh@20 -- # val=Yes 00:10:30.138 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.138 16:02:59 -- accel/accel.sh@20 -- # val= 00:10:30.138 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:30.138 16:02:59 -- accel/accel.sh@20 -- # val= 00:10:30.138 16:02:59 -- accel/accel.sh@21 -- # case "$var" in 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # IFS=: 00:10:30.138 16:02:59 -- accel/accel.sh@19 -- # read -r var val 00:10:31.583 16:03:01 -- accel/accel.sh@20 -- # val= 00:10:31.583 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.583 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.583 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.583 16:03:01 -- accel/accel.sh@20 -- # val= 00:10:31.583 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.583 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.583 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.583 16:03:01 -- accel/accel.sh@20 -- # val= 00:10:31.583 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.583 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.583 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.583 16:03:01 -- accel/accel.sh@20 -- # val= 00:10:31.583 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.583 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.583 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.583 16:03:01 -- accel/accel.sh@20 -- # val= 00:10:31.583 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.583 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.583 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.583 16:03:01 -- accel/accel.sh@20 -- # val= 00:10:31.583 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.583 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.583 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.583 16:03:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:31.583 16:03:01 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:31.583 16:03:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:31.583 00:10:31.583 real 0m1.386s 00:10:31.583 user 0m1.183s 00:10:31.583 sys 0m0.104s 00:10:31.583 16:03:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:31.583 16:03:01 -- common/autotest_common.sh@10 -- # set +x 00:10:31.583 ************************************ 00:10:31.583 END TEST accel_copy_crc32c 00:10:31.583 ************************************ 00:10:31.583 16:03:01 -- accel/accel.sh@106 -- # run_test 
accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:31.583 16:03:01 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:31.583 16:03:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:31.583 16:03:01 -- common/autotest_common.sh@10 -- # set +x 00:10:31.583 ************************************ 00:10:31.583 START TEST accel_copy_crc32c_C2 00:10:31.583 ************************************ 00:10:31.584 16:03:01 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:31.584 16:03:01 -- accel/accel.sh@16 -- # local accel_opc 00:10:31.584 16:03:01 -- accel/accel.sh@17 -- # local accel_module 00:10:31.584 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.584 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.584 16:03:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:31.584 16:03:01 -- accel/accel.sh@12 -- # build_accel_config 00:10:31.584 16:03:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:31.584 16:03:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:31.584 16:03:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:31.584 16:03:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:31.584 16:03:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:31.584 16:03:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:31.584 16:03:01 -- accel/accel.sh@40 -- # local IFS=, 00:10:31.584 16:03:01 -- accel/accel.sh@41 -- # jq -r . 00:10:31.584 [2024-04-15 16:03:01.253044] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:31.584 [2024-04-15 16:03:01.253621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73833 ] 00:10:31.584 [2024-04-15 16:03:01.403534] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.584 [2024-04-15 16:03:01.457087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.584 [2024-04-15 16:03:01.458035] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val= 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val= 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val=0x1 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val= 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val= 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 
-- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val=0 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val='8192 bytes' 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val= 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val=software 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@22 -- # accel_module=software 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val=32 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val=32 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val=1 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val=Yes 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val= 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:31.853 16:03:01 -- accel/accel.sh@20 -- # val= 00:10:31.853 16:03:01 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # IFS=: 00:10:31.853 16:03:01 -- accel/accel.sh@19 -- # read -r var val 00:10:32.824 16:03:02 -- accel/accel.sh@20 -- # val= 00:10:32.824 16:03:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:32.824 16:03:02 -- accel/accel.sh@19 -- # IFS=: 00:10:32.824 16:03:02 -- accel/accel.sh@19 -- # read -r var val 00:10:32.824 16:03:02 -- accel/accel.sh@20 -- # val= 00:10:32.824 16:03:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:32.824 16:03:02 -- accel/accel.sh@19 -- # IFS=: 00:10:32.824 16:03:02 -- accel/accel.sh@19 -- # read 
-r var val 00:10:32.824 16:03:02 -- accel/accel.sh@20 -- # val= 00:10:32.824 16:03:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:32.824 16:03:02 -- accel/accel.sh@19 -- # IFS=: 00:10:32.824 16:03:02 -- accel/accel.sh@19 -- # read -r var val 00:10:32.824 16:03:02 -- accel/accel.sh@20 -- # val= 00:10:32.824 16:03:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:32.824 16:03:02 -- accel/accel.sh@19 -- # IFS=: 00:10:32.824 16:03:02 -- accel/accel.sh@19 -- # read -r var val 00:10:32.824 16:03:02 -- accel/accel.sh@20 -- # val= 00:10:32.824 16:03:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:32.824 16:03:02 -- accel/accel.sh@19 -- # IFS=: 00:10:32.824 16:03:02 -- accel/accel.sh@19 -- # read -r var val 00:10:32.824 16:03:02 -- accel/accel.sh@20 -- # val= 00:10:32.824 16:03:02 -- accel/accel.sh@21 -- # case "$var" in 00:10:32.824 16:03:02 -- accel/accel.sh@19 -- # IFS=: 00:10:32.824 16:03:02 -- accel/accel.sh@19 -- # read -r var val 00:10:32.824 16:03:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:32.824 16:03:02 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:32.824 16:03:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:32.824 00:10:32.824 real 0m1.426s 00:10:32.824 user 0m1.218s 00:10:32.824 sys 0m0.106s 00:10:32.824 16:03:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:32.824 16:03:02 -- common/autotest_common.sh@10 -- # set +x 00:10:32.824 ************************************ 00:10:32.824 END TEST accel_copy_crc32c_C2 00:10:32.824 ************************************ 00:10:32.824 16:03:02 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:32.824 16:03:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:32.824 16:03:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.824 16:03:02 -- common/autotest_common.sh@10 -- # set +x 00:10:33.084 ************************************ 00:10:33.084 START TEST accel_dualcast 00:10:33.084 ************************************ 00:10:33.084 16:03:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:10:33.084 16:03:02 -- accel/accel.sh@16 -- # local accel_opc 00:10:33.084 16:03:02 -- accel/accel.sh@17 -- # local accel_module 00:10:33.084 16:03:02 -- accel/accel.sh@19 -- # IFS=: 00:10:33.084 16:03:02 -- accel/accel.sh@19 -- # read -r var val 00:10:33.084 16:03:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:33.084 16:03:02 -- accel/accel.sh@12 -- # build_accel_config 00:10:33.084 16:03:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:33.084 16:03:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:33.084 16:03:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:33.084 16:03:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:33.084 16:03:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:33.084 16:03:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:33.084 16:03:02 -- accel/accel.sh@40 -- # local IFS=, 00:10:33.084 16:03:02 -- accel/accel.sh@41 -- # jq -r . 00:10:33.084 [2024-04-15 16:03:02.797821] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
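With copy_crc32c and its -C 2 variant done, the section is effectively sweeping one software-backed workload after another for one second each. A compact way to drive a similar sweep by hand — again only a sketch, using just the -t/-w/-y flags that appear throughout this trace and dropping the workload-specific extras the harness adds (for example -f 128 -q 64 -a 64 for fill) — might be:

  # Sweep the workloads exercised in this part of the log, one second each (sketch).
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
  for wl in copy fill copy_crc32c dualcast compare xor; do
      echo "== $wl =="
      "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$wl" -y
  done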
00:10:33.084 [2024-04-15 16:03:02.798056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73878 ] 00:10:33.084 [2024-04-15 16:03:02.942014] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.084 [2024-04-15 16:03:02.995638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.084 [2024-04-15 16:03:02.996599] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:33.084 16:03:03 -- accel/accel.sh@20 -- # val= 00:10:33.084 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.084 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.084 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:33.084 16:03:03 -- accel/accel.sh@20 -- # val= 00:10:33.084 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.084 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.084 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:33.084 16:03:03 -- accel/accel.sh@20 -- # val=0x1 00:10:33.084 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.084 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.084 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:33.084 16:03:03 -- accel/accel.sh@20 -- # val= 00:10:33.084 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.084 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.084 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:33.084 16:03:03 -- accel/accel.sh@20 -- # val= 00:10:33.084 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.084 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.084 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:33.352 16:03:03 -- accel/accel.sh@20 -- # val=dualcast 00:10:33.352 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.352 16:03:03 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:33.352 16:03:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:33.352 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:33.352 16:03:03 -- accel/accel.sh@20 -- # val= 00:10:33.352 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:33.352 16:03:03 -- accel/accel.sh@20 -- # val=software 00:10:33.352 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.352 16:03:03 -- accel/accel.sh@22 -- # accel_module=software 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:33.352 16:03:03 -- accel/accel.sh@20 -- # val=32 00:10:33.352 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:33.352 16:03:03 -- accel/accel.sh@20 -- # val=32 00:10:33.352 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:33.352 16:03:03 -- accel/accel.sh@20 -- # val=1 00:10:33.352 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.352 16:03:03 
-- accel/accel.sh@19 -- # read -r var val 00:10:33.352 16:03:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:33.352 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:33.352 16:03:03 -- accel/accel.sh@20 -- # val=Yes 00:10:33.352 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:33.352 16:03:03 -- accel/accel.sh@20 -- # val= 00:10:33.352 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:33.352 16:03:03 -- accel/accel.sh@20 -- # val= 00:10:33.352 16:03:03 -- accel/accel.sh@21 -- # case "$var" in 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # IFS=: 00:10:33.352 16:03:03 -- accel/accel.sh@19 -- # read -r var val 00:10:34.305 16:03:04 -- accel/accel.sh@20 -- # val= 00:10:34.305 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.305 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.305 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.305 16:03:04 -- accel/accel.sh@20 -- # val= 00:10:34.305 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.305 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.305 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.305 16:03:04 -- accel/accel.sh@20 -- # val= 00:10:34.305 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.305 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.305 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.305 16:03:04 -- accel/accel.sh@20 -- # val= 00:10:34.305 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.305 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.305 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.305 16:03:04 -- accel/accel.sh@20 -- # val= 00:10:34.305 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.305 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.305 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.305 16:03:04 -- accel/accel.sh@20 -- # val= 00:10:34.305 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.305 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.305 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.305 16:03:04 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:34.305 16:03:04 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:10:34.305 16:03:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:34.305 00:10:34.305 real 0m1.411s 00:10:34.305 user 0m1.203s 00:10:34.305 sys 0m0.108s 00:10:34.305 16:03:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:34.305 16:03:04 -- common/autotest_common.sh@10 -- # set +x 00:10:34.305 ************************************ 00:10:34.305 END TEST accel_dualcast 00:10:34.305 ************************************ 00:10:34.305 16:03:04 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:34.305 16:03:04 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:34.305 16:03:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:34.305 16:03:04 -- common/autotest_common.sh@10 -- # set +x 00:10:34.562 ************************************ 00:10:34.562 START TEST accel_compare 00:10:34.562 ************************************ 00:10:34.562 16:03:04 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:10:34.562 16:03:04 
-- accel/accel.sh@16 -- # local accel_opc 00:10:34.562 16:03:04 -- accel/accel.sh@17 -- # local accel_module 00:10:34.562 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.562 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.562 16:03:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:34.562 16:03:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:34.562 16:03:04 -- accel/accel.sh@12 -- # build_accel_config 00:10:34.562 16:03:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:34.562 16:03:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:34.562 16:03:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.562 16:03:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.562 16:03:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:34.562 16:03:04 -- accel/accel.sh@40 -- # local IFS=, 00:10:34.562 16:03:04 -- accel/accel.sh@41 -- # jq -r . 00:10:34.562 [2024-04-15 16:03:04.332726] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:34.562 [2024-04-15 16:03:04.333035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73911 ] 00:10:34.562 [2024-04-15 16:03:04.476402] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.820 [2024-04-15 16:03:04.530351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.820 [2024-04-15 16:03:04.531462] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val= 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val= 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val=0x1 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val= 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val= 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val=compare 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@23 -- # accel_opc=compare 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val= 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@19 
-- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val=software 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@22 -- # accel_module=software 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val=32 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val=32 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val=1 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val=Yes 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val= 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:34.820 16:03:04 -- accel/accel.sh@20 -- # val= 00:10:34.820 16:03:04 -- accel/accel.sh@21 -- # case "$var" in 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # IFS=: 00:10:34.820 16:03:04 -- accel/accel.sh@19 -- # read -r var val 00:10:35.809 16:03:05 -- accel/accel.sh@20 -- # val= 00:10:35.810 16:03:05 -- accel/accel.sh@21 -- # case "$var" in 00:10:35.810 16:03:05 -- accel/accel.sh@19 -- # IFS=: 00:10:35.810 16:03:05 -- accel/accel.sh@19 -- # read -r var val 00:10:35.810 16:03:05 -- accel/accel.sh@20 -- # val= 00:10:35.810 16:03:05 -- accel/accel.sh@21 -- # case "$var" in 00:10:35.810 16:03:05 -- accel/accel.sh@19 -- # IFS=: 00:10:35.810 16:03:05 -- accel/accel.sh@19 -- # read -r var val 00:10:35.810 16:03:05 -- accel/accel.sh@20 -- # val= 00:10:35.810 16:03:05 -- accel/accel.sh@21 -- # case "$var" in 00:10:35.810 16:03:05 -- accel/accel.sh@19 -- # IFS=: 00:10:35.810 16:03:05 -- accel/accel.sh@19 -- # read -r var val 00:10:35.810 16:03:05 -- accel/accel.sh@20 -- # val= 00:10:35.810 16:03:05 -- accel/accel.sh@21 -- # case "$var" in 00:10:35.810 16:03:05 -- accel/accel.sh@19 -- # IFS=: 00:10:35.810 16:03:05 -- accel/accel.sh@19 -- # read -r var val 00:10:35.810 16:03:05 -- accel/accel.sh@20 -- # val= 00:10:35.810 16:03:05 -- accel/accel.sh@21 -- # case "$var" in 00:10:35.810 16:03:05 -- accel/accel.sh@19 -- # IFS=: 00:10:35.810 16:03:05 -- accel/accel.sh@19 -- # read -r var val 00:10:35.810 ************************************ 00:10:35.810 END TEST accel_compare 00:10:35.810 ************************************ 00:10:35.810 16:03:05 -- accel/accel.sh@20 -- # val= 00:10:35.810 16:03:05 -- accel/accel.sh@21 -- # case "$var" in 00:10:35.810 16:03:05 -- accel/accel.sh@19 -- # IFS=: 00:10:35.810 16:03:05 -- 
accel/accel.sh@19 -- # read -r var val 00:10:35.810 16:03:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:35.810 16:03:05 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:10:35.810 16:03:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:35.810 00:10:35.810 real 0m1.418s 00:10:35.810 user 0m1.211s 00:10:35.810 sys 0m0.111s 00:10:35.810 16:03:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:35.810 16:03:05 -- common/autotest_common.sh@10 -- # set +x 00:10:35.810 16:03:05 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:35.810 16:03:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:35.810 16:03:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:35.810 16:03:05 -- common/autotest_common.sh@10 -- # set +x 00:10:36.068 ************************************ 00:10:36.068 START TEST accel_xor 00:10:36.068 ************************************ 00:10:36.068 16:03:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:10:36.068 16:03:05 -- accel/accel.sh@16 -- # local accel_opc 00:10:36.068 16:03:05 -- accel/accel.sh@17 -- # local accel_module 00:10:36.068 16:03:05 -- accel/accel.sh@19 -- # IFS=: 00:10:36.068 16:03:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:36.068 16:03:05 -- accel/accel.sh@19 -- # read -r var val 00:10:36.068 16:03:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:36.068 16:03:05 -- accel/accel.sh@12 -- # build_accel_config 00:10:36.068 16:03:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:36.068 16:03:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:36.068 16:03:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:36.068 16:03:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:36.068 16:03:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:36.068 16:03:05 -- accel/accel.sh@40 -- # local IFS=, 00:10:36.068 16:03:05 -- accel/accel.sh@41 -- # jq -r . 00:10:36.068 [2024-04-15 16:03:05.856817] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
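By this point the section has logged half a dozen real/user/sys triples, one per completed test. A quick way to skim those timing numbers out of a saved copy of this output — assuming it has been captured verbatim as build.log, a hypothetical filename — is to extract the wall-clock lines and the END TEST markers separately:

  # Quick look at the per-test wall-clock numbers in a captured copy of this log
  # (sketch; build.log is an assumed capture of the output above).
  grep -Eo 'real[[:space:]]+[0-9]+m[0-9.]+s' build.log   # wall-clock of each timed block
  grep -Eo 'END TEST [A-Za-z0-9_]+' build.log            # test names, in completion order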
00:10:36.068 [2024-04-15 16:03:05.857062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73955 ] 00:10:36.068 [2024-04-15 16:03:05.990495] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.327 [2024-04-15 16:03:06.040381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.327 [2024-04-15 16:03:06.041188] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:36.327 16:03:06 -- accel/accel.sh@20 -- # val= 00:10:36.327 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.327 16:03:06 -- accel/accel.sh@20 -- # val= 00:10:36.327 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.327 16:03:06 -- accel/accel.sh@20 -- # val=0x1 00:10:36.327 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.327 16:03:06 -- accel/accel.sh@20 -- # val= 00:10:36.327 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.327 16:03:06 -- accel/accel.sh@20 -- # val= 00:10:36.327 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.327 16:03:06 -- accel/accel.sh@20 -- # val=xor 00:10:36.327 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.327 16:03:06 -- accel/accel.sh@23 -- # accel_opc=xor 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.327 16:03:06 -- accel/accel.sh@20 -- # val=2 00:10:36.327 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.327 16:03:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:36.327 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.327 16:03:06 -- accel/accel.sh@20 -- # val= 00:10:36.327 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.327 16:03:06 -- accel/accel.sh@20 -- # val=software 00:10:36.327 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.327 16:03:06 -- accel/accel.sh@22 -- # accel_module=software 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.327 16:03:06 -- accel/accel.sh@20 -- # val=32 00:10:36.327 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.327 16:03:06 -- accel/accel.sh@20 -- # val=32 00:10:36.327 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.327 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.328 16:03:06 -- 
accel/accel.sh@19 -- # read -r var val 00:10:36.328 16:03:06 -- accel/accel.sh@20 -- # val=1 00:10:36.328 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.328 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.328 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.328 16:03:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:36.328 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.328 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.328 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.328 16:03:06 -- accel/accel.sh@20 -- # val=Yes 00:10:36.328 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.328 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.328 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.328 16:03:06 -- accel/accel.sh@20 -- # val= 00:10:36.328 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.328 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.328 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:36.328 16:03:06 -- accel/accel.sh@20 -- # val= 00:10:36.328 16:03:06 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.328 16:03:06 -- accel/accel.sh@19 -- # IFS=: 00:10:36.328 16:03:06 -- accel/accel.sh@19 -- # read -r var val 00:10:37.261 16:03:07 -- accel/accel.sh@20 -- # val= 00:10:37.261 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.261 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.261 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.261 16:03:07 -- accel/accel.sh@20 -- # val= 00:10:37.261 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.261 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.261 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.261 16:03:07 -- accel/accel.sh@20 -- # val= 00:10:37.261 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.261 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.261 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.261 16:03:07 -- accel/accel.sh@20 -- # val= 00:10:37.261 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.261 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.261 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.261 16:03:07 -- accel/accel.sh@20 -- # val= 00:10:37.261 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.261 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.261 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.261 16:03:07 -- accel/accel.sh@20 -- # val= 00:10:37.261 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.261 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.261 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.261 16:03:07 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:37.261 16:03:07 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:37.261 16:03:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:37.261 00:10:37.261 real 0m1.392s 00:10:37.261 user 0m1.191s 00:10:37.261 sys 0m0.105s 00:10:37.261 16:03:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:37.261 16:03:07 -- common/autotest_common.sh@10 -- # set +x 00:10:37.261 ************************************ 00:10:37.261 END TEST accel_xor 00:10:37.261 ************************************ 00:10:37.519 16:03:07 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:37.519 16:03:07 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:37.519 16:03:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:37.519 16:03:07 -- common/autotest_common.sh@10 -- # set +x 00:10:37.519 
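The xor case is run twice by the harness: the run that just finished used the default configuration (the trace echoed val=2 after accel_opc=xor), and the run queued next adds -x 3 (val=3 is echoed below). Reading -x as the source-buffer count is an inference from those echoed values, not something the log states outright; under that assumption the two invocations reduce to:

  # The two xor runs traced in this section (flags copied from the log; treating
  # -x as the source count is an inference from the echoed val=2 / val=3).
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y        # default run, two sources
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 3   # second run, -x 3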
************************************ 00:10:37.519 START TEST accel_xor 00:10:37.519 ************************************ 00:10:37.519 16:03:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:10:37.519 16:03:07 -- accel/accel.sh@16 -- # local accel_opc 00:10:37.519 16:03:07 -- accel/accel.sh@17 -- # local accel_module 00:10:37.519 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.519 16:03:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:37.519 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.519 16:03:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:37.519 16:03:07 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.519 16:03:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:37.519 16:03:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:37.519 16:03:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.519 16:03:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.519 16:03:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:37.519 16:03:07 -- accel/accel.sh@40 -- # local IFS=, 00:10:37.519 16:03:07 -- accel/accel.sh@41 -- # jq -r . 00:10:37.519 [2024-04-15 16:03:07.364655] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:37.519 [2024-04-15 16:03:07.364873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73988 ] 00:10:37.777 [2024-04-15 16:03:07.508247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.777 [2024-04-15 16:03:07.555953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.777 [2024-04-15 16:03:07.556884] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val= 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val= 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val=0x1 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val= 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val= 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val=xor 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@23 -- # accel_opc=xor 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val=3 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 
00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val= 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val=software 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@22 -- # accel_module=software 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val=32 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val=32 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val=1 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val=Yes 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val= 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:37.777 16:03:07 -- accel/accel.sh@20 -- # val= 00:10:37.777 16:03:07 -- accel/accel.sh@21 -- # case "$var" in 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # IFS=: 00:10:37.777 16:03:07 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:08 -- accel/accel.sh@20 -- # val= 00:10:39.209 16:03:08 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:08 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:08 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:08 -- accel/accel.sh@20 -- # val= 00:10:39.209 16:03:08 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:08 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:08 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:08 -- accel/accel.sh@20 -- # val= 00:10:39.209 16:03:08 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:08 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:08 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:08 -- accel/accel.sh@20 -- # val= 00:10:39.209 16:03:08 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:08 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:08 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:08 -- accel/accel.sh@20 -- # val= 00:10:39.209 16:03:08 -- accel/accel.sh@21 -- # case 
"$var" in 00:10:39.209 16:03:08 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:08 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:08 -- accel/accel.sh@20 -- # val= 00:10:39.209 16:03:08 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:08 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:08 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:39.209 16:03:08 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:39.209 16:03:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:39.209 00:10:39.209 real 0m1.407s 00:10:39.209 user 0m1.207s 00:10:39.209 sys 0m0.103s 00:10:39.209 16:03:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:39.209 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:10:39.209 ************************************ 00:10:39.209 END TEST accel_xor 00:10:39.209 ************************************ 00:10:39.209 16:03:08 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:39.209 16:03:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:39.209 16:03:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:39.209 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:10:39.209 ************************************ 00:10:39.209 START TEST accel_dif_verify 00:10:39.209 ************************************ 00:10:39.209 16:03:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:10:39.209 16:03:08 -- accel/accel.sh@16 -- # local accel_opc 00:10:39.209 16:03:08 -- accel/accel.sh@17 -- # local accel_module 00:10:39.209 16:03:08 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:39.209 16:03:08 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:39.209 16:03:08 -- accel/accel.sh@12 -- # build_accel_config 00:10:39.209 16:03:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:39.209 16:03:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:39.209 16:03:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.209 16:03:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.209 16:03:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:39.209 16:03:08 -- accel/accel.sh@40 -- # local IFS=, 00:10:39.209 16:03:08 -- accel/accel.sh@41 -- # jq -r . 00:10:39.209 [2024-04-15 16:03:08.888371] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:10:39.209 [2024-04-15 16:03:08.888779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74034 ] 00:10:39.209 [2024-04-15 16:03:09.038968] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.209 [2024-04-15 16:03:09.086503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.209 [2024-04-15 16:03:09.087369] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val= 00:10:39.209 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val= 00:10:39.209 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val=0x1 00:10:39.209 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val= 00:10:39.209 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val= 00:10:39.209 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val=dif_verify 00:10:39.209 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:09 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:39.209 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:39.209 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val='512 bytes' 00:10:39.209 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val='8 bytes' 00:10:39.209 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val= 00:10:39.209 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val=software 00:10:39.209 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:09 -- accel/accel.sh@22 -- # accel_module=software 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- 
# IFS=: 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val=32 00:10:39.209 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val=32 00:10:39.209 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.209 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.209 16:03:09 -- accel/accel.sh@20 -- # val=1 00:10:39.210 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.210 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.210 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.210 16:03:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:39.210 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.210 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.210 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.210 16:03:09 -- accel/accel.sh@20 -- # val=No 00:10:39.210 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.210 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.210 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.210 16:03:09 -- accel/accel.sh@20 -- # val= 00:10:39.210 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.210 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.210 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:39.210 16:03:09 -- accel/accel.sh@20 -- # val= 00:10:39.210 16:03:09 -- accel/accel.sh@21 -- # case "$var" in 00:10:39.210 16:03:09 -- accel/accel.sh@19 -- # IFS=: 00:10:39.210 16:03:09 -- accel/accel.sh@19 -- # read -r var val 00:10:40.582 16:03:10 -- accel/accel.sh@20 -- # val= 00:10:40.582 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.582 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.582 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.582 16:03:10 -- accel/accel.sh@20 -- # val= 00:10:40.582 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.582 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.582 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.582 16:03:10 -- accel/accel.sh@20 -- # val= 00:10:40.582 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.582 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.582 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.582 16:03:10 -- accel/accel.sh@20 -- # val= 00:10:40.582 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.582 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.582 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.582 16:03:10 -- accel/accel.sh@20 -- # val= 00:10:40.582 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.582 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.582 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.582 16:03:10 -- accel/accel.sh@20 -- # val= 00:10:40.582 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.582 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.582 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.582 16:03:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:40.582 16:03:10 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:10:40.582 16:03:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:40.582 00:10:40.582 real 0m1.419s 00:10:40.582 user 0m1.214s 00:10:40.582 sys 0m0.118s 00:10:40.582 16:03:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:40.582 16:03:10 -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.582 ************************************ 00:10:40.582 END TEST accel_dif_verify 00:10:40.582 ************************************ 00:10:40.582 16:03:10 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:40.582 16:03:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:40.582 16:03:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:40.582 16:03:10 -- common/autotest_common.sh@10 -- # set +x 00:10:40.582 ************************************ 00:10:40.582 START TEST accel_dif_generate 00:10:40.582 ************************************ 00:10:40.582 16:03:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:10:40.582 16:03:10 -- accel/accel.sh@16 -- # local accel_opc 00:10:40.582 16:03:10 -- accel/accel.sh@17 -- # local accel_module 00:10:40.582 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.582 16:03:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:40.582 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.582 16:03:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:40.582 16:03:10 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.582 16:03:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:40.582 16:03:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:40.582 16:03:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.582 16:03:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.582 16:03:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:40.582 16:03:10 -- accel/accel.sh@40 -- # local IFS=, 00:10:40.582 16:03:10 -- accel/accel.sh@41 -- # jq -r . 00:10:40.582 [2024-04-15 16:03:10.433056] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
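Both DIF cases in this stretch of the log go through the same accel_perf binary; only the -w value changes between the dif_verify run that just finished and the dif_generate run starting here. A minimal sketch of the two invocations, under the same assumptions as the xor example above (built tree, hugepages, harness-supplied -c config omitted):

# verify DIF metadata for 1 second on the software path
./build/examples/accel_perf -t 1 -w dif_verify
# generate DIF metadata instead of verifying it
./build/examples/accel_perf -t 1 -w dif_generate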
00:10:40.582 [2024-04-15 16:03:10.433293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74067 ] 00:10:40.841 [2024-04-15 16:03:10.575382] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.841 [2024-04-15 16:03:10.641779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.841 [2024-04-15 16:03:10.642826] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val= 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val= 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val=0x1 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val= 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val= 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val=dif_generate 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val='512 bytes' 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val='8 bytes' 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val= 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val=software 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@22 -- # accel_module=software 00:10:40.841 16:03:10 -- accel/accel.sh@19 
-- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val=32 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val=32 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val=1 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val=No 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val= 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:40.841 16:03:10 -- accel/accel.sh@20 -- # val= 00:10:40.841 16:03:10 -- accel/accel.sh@21 -- # case "$var" in 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # IFS=: 00:10:40.841 16:03:10 -- accel/accel.sh@19 -- # read -r var val 00:10:42.214 16:03:11 -- accel/accel.sh@20 -- # val= 00:10:42.214 16:03:11 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.214 16:03:11 -- accel/accel.sh@19 -- # IFS=: 00:10:42.214 16:03:11 -- accel/accel.sh@19 -- # read -r var val 00:10:42.214 16:03:11 -- accel/accel.sh@20 -- # val= 00:10:42.215 16:03:11 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.215 16:03:11 -- accel/accel.sh@19 -- # IFS=: 00:10:42.215 16:03:11 -- accel/accel.sh@19 -- # read -r var val 00:10:42.215 16:03:11 -- accel/accel.sh@20 -- # val= 00:10:42.215 16:03:11 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.215 16:03:11 -- accel/accel.sh@19 -- # IFS=: 00:10:42.215 16:03:11 -- accel/accel.sh@19 -- # read -r var val 00:10:42.215 16:03:11 -- accel/accel.sh@20 -- # val= 00:10:42.215 16:03:11 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.215 16:03:11 -- accel/accel.sh@19 -- # IFS=: 00:10:42.215 16:03:11 -- accel/accel.sh@19 -- # read -r var val 00:10:42.215 16:03:11 -- accel/accel.sh@20 -- # val= 00:10:42.215 16:03:11 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.215 16:03:11 -- accel/accel.sh@19 -- # IFS=: 00:10:42.215 16:03:11 -- accel/accel.sh@19 -- # read -r var val 00:10:42.215 16:03:11 -- accel/accel.sh@20 -- # val= 00:10:42.215 16:03:11 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.215 16:03:11 -- accel/accel.sh@19 -- # IFS=: 00:10:42.215 16:03:11 -- accel/accel.sh@19 -- # read -r var val 00:10:42.215 16:03:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:42.215 16:03:11 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:10:42.215 16:03:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:42.215 00:10:42.215 real 0m1.427s 00:10:42.215 user 0m1.207s 00:10:42.215 sys 0m0.121s 00:10:42.215 16:03:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:42.215 16:03:11 -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.215 ************************************ 00:10:42.215 END TEST accel_dif_generate 00:10:42.215 ************************************ 00:10:42.215 16:03:11 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:42.215 16:03:11 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:42.215 16:03:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:42.215 16:03:11 -- common/autotest_common.sh@10 -- # set +x 00:10:42.215 ************************************ 00:10:42.215 START TEST accel_dif_generate_copy 00:10:42.215 ************************************ 00:10:42.215 16:03:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:10:42.215 16:03:11 -- accel/accel.sh@16 -- # local accel_opc 00:10:42.215 16:03:11 -- accel/accel.sh@17 -- # local accel_module 00:10:42.215 16:03:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:42.215 16:03:11 -- accel/accel.sh@19 -- # IFS=: 00:10:42.215 16:03:11 -- accel/accel.sh@19 -- # read -r var val 00:10:42.215 16:03:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:42.215 16:03:11 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.215 16:03:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:42.215 16:03:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:42.215 16:03:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.215 16:03:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.215 16:03:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:42.215 16:03:11 -- accel/accel.sh@40 -- # local IFS=, 00:10:42.215 16:03:11 -- accel/accel.sh@41 -- # jq -r . 00:10:42.215 [2024-04-15 16:03:12.001917] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
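The dif_generate_copy case launched here keeps the same command shape again; only the opcode differs. Sketch, same assumptions as above:

# generate DIF metadata and copy the payload in one operation
./build/examples/accel_perf -t 1 -w dif_generate_copy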
00:10:42.215 [2024-04-15 16:03:12.002090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74111 ] 00:10:42.215 [2024-04-15 16:03:12.166280] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.473 [2024-04-15 16:03:12.231901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.473 [2024-04-15 16:03:12.232760] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val= 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val= 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val=0x1 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val= 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val= 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val= 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val=software 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@22 -- # accel_module=software 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val=32 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val=32 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # 
IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val=1 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val=No 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val= 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:42.473 16:03:12 -- accel/accel.sh@20 -- # val= 00:10:42.473 16:03:12 -- accel/accel.sh@21 -- # case "$var" in 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # IFS=: 00:10:42.473 16:03:12 -- accel/accel.sh@19 -- # read -r var val 00:10:43.850 16:03:13 -- accel/accel.sh@20 -- # val= 00:10:43.850 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:43.850 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:43.850 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:43.850 16:03:13 -- accel/accel.sh@20 -- # val= 00:10:43.850 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:43.850 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:43.850 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:43.850 16:03:13 -- accel/accel.sh@20 -- # val= 00:10:43.850 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:43.850 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:43.850 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:43.850 16:03:13 -- accel/accel.sh@20 -- # val= 00:10:43.850 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:43.850 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:43.850 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:43.850 16:03:13 -- accel/accel.sh@20 -- # val= 00:10:43.850 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:43.850 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:43.850 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:43.850 16:03:13 -- accel/accel.sh@20 -- # val= 00:10:43.850 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:43.850 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:43.850 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:43.850 16:03:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:43.850 16:03:13 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:10:43.850 16:03:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:43.850 00:10:43.850 real 0m1.441s 00:10:43.850 user 0m1.243s 00:10:43.850 sys 0m0.103s 00:10:43.850 16:03:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:43.850 16:03:13 -- common/autotest_common.sh@10 -- # set +x 00:10:43.850 ************************************ 00:10:43.850 END TEST accel_dif_generate_copy 00:10:43.850 ************************************ 00:10:43.850 16:03:13 -- accel/accel.sh@115 -- # [[ y == y ]] 00:10:43.850 16:03:13 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:43.850 16:03:13 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 
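The compress test starting above is the first one that takes an input file: -l points accel_perf at the bib corpus shipped in the SPDK tree. Standalone sketch, same assumptions as the earlier examples:

# compress test/accel/bib for 1 second on the software path
./build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib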
00:10:43.850 16:03:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:43.850 16:03:13 -- common/autotest_common.sh@10 -- # set +x 00:10:43.850 ************************************ 00:10:43.850 START TEST accel_comp 00:10:43.850 ************************************ 00:10:43.850 16:03:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:43.850 16:03:13 -- accel/accel.sh@16 -- # local accel_opc 00:10:43.850 16:03:13 -- accel/accel.sh@17 -- # local accel_module 00:10:43.850 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:43.850 16:03:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:43.850 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:43.850 16:03:13 -- accel/accel.sh@12 -- # build_accel_config 00:10:43.850 16:03:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:43.850 16:03:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:43.850 16:03:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:43.850 16:03:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:43.850 16:03:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:43.850 16:03:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:43.850 16:03:13 -- accel/accel.sh@40 -- # local IFS=, 00:10:43.850 16:03:13 -- accel/accel.sh@41 -- # jq -r . 00:10:43.850 [2024-04-15 16:03:13.574420] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:43.850 [2024-04-15 16:03:13.574769] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74144 ] 00:10:43.850 [2024-04-15 16:03:13.717342] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.850 [2024-04-15 16:03:13.772785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.850 [2024-04-15 16:03:13.773750] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:44.132 16:03:13 -- accel/accel.sh@20 -- # val= 00:10:44.132 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.132 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val= 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val= 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val=0x1 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val= 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val= 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 
16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val=compress 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@23 -- # accel_opc=compress 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val= 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val=software 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@22 -- # accel_module=software 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val=32 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val=32 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val=1 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val=No 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val= 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:44.133 16:03:13 -- accel/accel.sh@20 -- # val= 00:10:44.133 16:03:13 -- accel/accel.sh@21 -- # case "$var" in 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # IFS=: 00:10:44.133 16:03:13 -- accel/accel.sh@19 -- # read -r var val 00:10:45.111 16:03:14 -- accel/accel.sh@20 -- # val= 00:10:45.111 16:03:14 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.111 16:03:14 -- accel/accel.sh@19 -- # IFS=: 00:10:45.111 16:03:14 -- accel/accel.sh@19 -- # read -r var val 00:10:45.111 16:03:14 -- accel/accel.sh@20 -- # val= 00:10:45.111 16:03:14 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.111 16:03:14 -- accel/accel.sh@19 -- # IFS=: 00:10:45.111 16:03:14 -- accel/accel.sh@19 -- # read -r var val 
00:10:45.111 16:03:14 -- accel/accel.sh@20 -- # val= 00:10:45.111 16:03:14 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.111 16:03:14 -- accel/accel.sh@19 -- # IFS=: 00:10:45.111 16:03:14 -- accel/accel.sh@19 -- # read -r var val 00:10:45.111 16:03:14 -- accel/accel.sh@20 -- # val= 00:10:45.111 16:03:14 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.111 16:03:14 -- accel/accel.sh@19 -- # IFS=: 00:10:45.111 16:03:14 -- accel/accel.sh@19 -- # read -r var val 00:10:45.111 16:03:14 -- accel/accel.sh@20 -- # val= 00:10:45.111 16:03:14 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.111 16:03:14 -- accel/accel.sh@19 -- # IFS=: 00:10:45.111 16:03:14 -- accel/accel.sh@19 -- # read -r var val 00:10:45.111 16:03:14 -- accel/accel.sh@20 -- # val= 00:10:45.111 16:03:14 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.111 16:03:14 -- accel/accel.sh@19 -- # IFS=: 00:10:45.111 16:03:14 -- accel/accel.sh@19 -- # read -r var val 00:10:45.111 16:03:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:45.111 ************************************ 00:10:45.111 END TEST accel_comp 00:10:45.111 ************************************ 00:10:45.111 16:03:14 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:45.111 16:03:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:45.111 00:10:45.111 real 0m1.411s 00:10:45.111 user 0m1.212s 00:10:45.111 sys 0m0.103s 00:10:45.111 16:03:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:45.111 16:03:14 -- common/autotest_common.sh@10 -- # set +x 00:10:45.111 16:03:15 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:45.111 16:03:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:45.111 16:03:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:45.111 16:03:15 -- common/autotest_common.sh@10 -- # set +x 00:10:45.369 ************************************ 00:10:45.369 START TEST accel_decomp 00:10:45.369 ************************************ 00:10:45.369 16:03:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:45.369 16:03:15 -- accel/accel.sh@16 -- # local accel_opc 00:10:45.369 16:03:15 -- accel/accel.sh@17 -- # local accel_module 00:10:45.369 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.369 16:03:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:45.369 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.369 16:03:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:45.369 16:03:15 -- accel/accel.sh@12 -- # build_accel_config 00:10:45.369 16:03:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:45.369 16:03:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:45.369 16:03:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.369 16:03:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.369 16:03:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:45.369 16:03:15 -- accel/accel.sh@40 -- # local IFS=, 00:10:45.369 16:03:15 -- accel/accel.sh@41 -- # jq -r . 00:10:45.369 [2024-04-15 16:03:15.127356] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
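accel_decomp mirrors the compress case: the same bib file is fed back through -w decompress, and -y verifies the output. Sketch, same assumptions as before:

# decompress the test corpus and verify the result
./build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y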
00:10:45.369 [2024-04-15 16:03:15.127677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74187 ] 00:10:45.369 [2024-04-15 16:03:15.279436] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.627 [2024-04-15 16:03:15.339265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.627 [2024-04-15 16:03:15.340252] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:45.627 16:03:15 -- accel/accel.sh@20 -- # val= 00:10:45.627 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.627 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.627 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.627 16:03:15 -- accel/accel.sh@20 -- # val= 00:10:45.627 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.627 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.627 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.627 16:03:15 -- accel/accel.sh@20 -- # val= 00:10:45.627 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.627 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.627 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.627 16:03:15 -- accel/accel.sh@20 -- # val=0x1 00:10:45.627 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.627 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.627 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.627 16:03:15 -- accel/accel.sh@20 -- # val= 00:10:45.627 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.627 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.627 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.627 16:03:15 -- accel/accel.sh@20 -- # val= 00:10:45.627 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.627 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.627 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.627 16:03:15 -- accel/accel.sh@20 -- # val=decompress 00:10:45.627 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.627 16:03:15 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:45.627 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.627 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.628 16:03:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:45.628 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.628 16:03:15 -- accel/accel.sh@20 -- # val= 00:10:45.628 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.628 16:03:15 -- accel/accel.sh@20 -- # val=software 00:10:45.628 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.628 16:03:15 -- accel/accel.sh@22 -- # accel_module=software 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.628 16:03:15 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:45.628 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.628 16:03:15 -- accel/accel.sh@20 -- # val=32 00:10:45.628 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.628 16:03:15 -- 
accel/accel.sh@19 -- # IFS=: 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.628 16:03:15 -- accel/accel.sh@20 -- # val=32 00:10:45.628 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.628 16:03:15 -- accel/accel.sh@20 -- # val=1 00:10:45.628 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.628 16:03:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:45.628 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.628 16:03:15 -- accel/accel.sh@20 -- # val=Yes 00:10:45.628 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.628 16:03:15 -- accel/accel.sh@20 -- # val= 00:10:45.628 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:45.628 16:03:15 -- accel/accel.sh@20 -- # val= 00:10:45.628 16:03:15 -- accel/accel.sh@21 -- # case "$var" in 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # IFS=: 00:10:45.628 16:03:15 -- accel/accel.sh@19 -- # read -r var val 00:10:46.562 16:03:16 -- accel/accel.sh@20 -- # val= 00:10:46.562 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:46.562 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:46.562 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:46.562 16:03:16 -- accel/accel.sh@20 -- # val= 00:10:46.562 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:46.562 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:46.562 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:46.562 16:03:16 -- accel/accel.sh@20 -- # val= 00:10:46.821 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:46.821 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:46.821 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:46.821 16:03:16 -- accel/accel.sh@20 -- # val= 00:10:46.821 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:46.821 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:46.821 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:46.821 16:03:16 -- accel/accel.sh@20 -- # val= 00:10:46.821 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:46.821 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:46.821 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:46.821 16:03:16 -- accel/accel.sh@20 -- # val= 00:10:46.821 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:46.821 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:46.821 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:46.821 16:03:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:46.821 16:03:16 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:46.821 16:03:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:46.821 00:10:46.821 real 0m1.433s 00:10:46.821 user 0m1.221s 00:10:46.821 sys 0m0.117s 00:10:46.821 16:03:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:46.821 ************************************ 00:10:46.821 END TEST accel_decomp 00:10:46.821 ************************************ 00:10:46.821 16:03:16 -- common/autotest_common.sh@10 -- # set +x 00:10:46.821 16:03:16 -- accel/accel.sh@118 -- # 
run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:46.821 16:03:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:46.821 16:03:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:46.821 16:03:16 -- common/autotest_common.sh@10 -- # set +x 00:10:46.821 ************************************ 00:10:46.821 START TEST accel_decmop_full 00:10:46.821 ************************************ 00:10:46.821 16:03:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:46.821 16:03:16 -- accel/accel.sh@16 -- # local accel_opc 00:10:46.821 16:03:16 -- accel/accel.sh@17 -- # local accel_module 00:10:46.821 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:46.821 16:03:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:46.821 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:46.821 16:03:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:46.821 16:03:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:46.821 16:03:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:46.821 16:03:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:46.821 16:03:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.821 16:03:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.821 16:03:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:46.821 16:03:16 -- accel/accel.sh@40 -- # local IFS=, 00:10:46.821 16:03:16 -- accel/accel.sh@41 -- # jq -r . 00:10:46.821 [2024-04-15 16:03:16.671296] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
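The accel_decmop_full variant launched here (the 'decmop' spelling is taken verbatim from the run_test call above) differs from the previous decompress run only by the extra -o 0 argument visible in that call. Sketch, same assumptions as before:

./build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0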
00:10:46.821 [2024-04-15 16:03:16.671536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74222 ] 00:10:47.080 [2024-04-15 16:03:16.814874] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.080 [2024-04-15 16:03:16.870087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.080 [2024-04-15 16:03:16.871219] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val= 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val= 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val= 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val=0x1 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val= 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val= 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val=decompress 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val= 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val=software 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@22 -- # accel_module=software 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val=32 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- 
accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val=32 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val=1 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val=Yes 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val= 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:47.080 16:03:16 -- accel/accel.sh@20 -- # val= 00:10:47.080 16:03:16 -- accel/accel.sh@21 -- # case "$var" in 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # IFS=: 00:10:47.080 16:03:16 -- accel/accel.sh@19 -- # read -r var val 00:10:48.457 16:03:18 -- accel/accel.sh@20 -- # val= 00:10:48.457 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.457 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.457 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.457 16:03:18 -- accel/accel.sh@20 -- # val= 00:10:48.457 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.457 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.457 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.457 16:03:18 -- accel/accel.sh@20 -- # val= 00:10:48.457 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.457 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.457 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.457 16:03:18 -- accel/accel.sh@20 -- # val= 00:10:48.457 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.457 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.457 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.457 16:03:18 -- accel/accel.sh@20 -- # val= 00:10:48.457 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.457 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.457 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.457 16:03:18 -- accel/accel.sh@20 -- # val= 00:10:48.457 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.457 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.457 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.457 16:03:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:48.457 16:03:18 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:48.457 16:03:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:48.457 00:10:48.457 real 0m1.426s 00:10:48.457 user 0m1.225s 00:10:48.457 sys 0m0.109s 00:10:48.457 16:03:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:48.457 16:03:18 -- common/autotest_common.sh@10 -- # set +x 00:10:48.457 ************************************ 00:10:48.457 END TEST accel_decmop_full 00:10:48.457 ************************************ 00:10:48.457 16:03:18 -- accel/accel.sh@119 
-- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:48.457 16:03:18 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:48.457 16:03:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:48.457 16:03:18 -- common/autotest_common.sh@10 -- # set +x 00:10:48.457 ************************************ 00:10:48.457 START TEST accel_decomp_mcore 00:10:48.457 ************************************ 00:10:48.457 16:03:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:48.457 16:03:18 -- accel/accel.sh@16 -- # local accel_opc 00:10:48.458 16:03:18 -- accel/accel.sh@17 -- # local accel_module 00:10:48.458 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.458 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.458 16:03:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:48.458 16:03:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:48.458 16:03:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.458 16:03:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:48.458 16:03:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:48.458 16:03:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.458 16:03:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.458 16:03:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:48.458 16:03:18 -- accel/accel.sh@40 -- # local IFS=, 00:10:48.458 16:03:18 -- accel/accel.sh@41 -- # jq -r . 00:10:48.458 [2024-04-15 16:03:18.197786] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
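The run recorded just above drives SPDK's accel_perf example: a 1-second software decompress of test/accel/bib, verified against the original data (-y), across the four cores selected by -m 0xf. A minimal standalone sketch of that invocation, assuming the tree built at /home/vagrant/spdk_repo/spdk as in this log (the harness additionally feeds a JSON accel config over -c /dev/fd/62, omitted here):

    cd /home/vagrant/spdk_repo/spdk
    # decompress test/accel/bib for 1 second on cores 0-3 and verify the output
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf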
00:10:48.458 [2024-04-15 16:03:18.198025] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74260 ] 00:10:48.458 [2024-04-15 16:03:18.329785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.458 [2024-04-15 16:03:18.386734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.458 [2024-04-15 16:03:18.386860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.458 [2024-04-15 16:03:18.387023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.458 [2024-04-15 16:03:18.387024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.458 [2024-04-15 16:03:18.388832] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val= 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val= 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val= 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val=0xf 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val= 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val= 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val=decompress 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val= 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val=software 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@22 -- # accel_module=software 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:48.716 
16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val=32 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val=32 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val=1 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val=Yes 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val= 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:48.716 16:03:18 -- accel/accel.sh@20 -- # val= 00:10:48.716 16:03:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # IFS=: 00:10:48.716 16:03:18 -- accel/accel.sh@19 -- # read -r var val 00:10:49.663 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:49.663 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:49.663 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:49.663 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:49.663 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:49.663 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:49.663 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:49.663 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:49.663 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:49.663 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:49.663 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:49.663 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:49.663 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:49.663 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:49.663 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:49.663 16:03:19 -- 
accel/accel.sh@21 -- # case "$var" in 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:49.663 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:49.663 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:49.663 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:49.663 ************************************ 00:10:49.663 END TEST accel_decomp_mcore 00:10:49.663 ************************************ 00:10:49.663 16:03:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:49.663 16:03:19 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:49.663 16:03:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:49.663 00:10:49.663 real 0m1.406s 00:10:49.663 user 0m0.013s 00:10:49.663 sys 0m0.004s 00:10:49.664 16:03:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:49.664 16:03:19 -- common/autotest_common.sh@10 -- # set +x 00:10:49.664 16:03:19 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:49.664 16:03:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:49.922 16:03:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:49.922 16:03:19 -- common/autotest_common.sh@10 -- # set +x 00:10:49.922 ************************************ 00:10:49.922 START TEST accel_decomp_full_mcore 00:10:49.922 ************************************ 00:10:49.922 16:03:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:49.922 16:03:19 -- accel/accel.sh@16 -- # local accel_opc 00:10:49.922 16:03:19 -- accel/accel.sh@17 -- # local accel_module 00:10:49.922 16:03:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:49.922 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:49.922 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:49.922 16:03:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:49.922 16:03:19 -- accel/accel.sh@12 -- # build_accel_config 00:10:49.922 16:03:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:49.922 16:03:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:49.922 16:03:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.922 16:03:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.922 16:03:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:49.922 16:03:19 -- accel/accel.sh@40 -- # local IFS=, 00:10:49.923 16:03:19 -- accel/accel.sh@41 -- # jq -r . 00:10:49.923 [2024-04-15 16:03:19.739022] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
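accel_decomp_full_mcore repeats the same workload with -o 0 appended; as the later val='111250 bytes' line shows (versus the '4096 bytes' of the run above), each operation then covers the whole bib file instead of 4 KiB chunks. A sketch under the same assumptions as the previous snippet:

    # "full" variant: whole-file (111250-byte) operations on cores 0-3
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf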
00:10:49.923 [2024-04-15 16:03:19.739267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74302 ] 00:10:49.923 [2024-04-15 16:03:19.880404] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.181 [2024-04-15 16:03:19.946513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.181 [2024-04-15 16:03:19.946650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.181 [2024-04-15 16:03:19.946741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.181 [2024-04-15 16:03:19.946741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.181 [2024-04-15 16:03:19.948084] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:50.181 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:50.181 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:50.181 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:50.181 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:50.181 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:50.181 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:50.181 16:03:19 -- accel/accel.sh@20 -- # val=0xf 00:10:50.181 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:50.181 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:50.181 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:50.181 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:50.181 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:50.181 16:03:19 -- accel/accel.sh@20 -- # val=decompress 00:10:50.181 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.181 16:03:19 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:50.181 16:03:19 -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:50.181 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # IFS=: 00:10:50.181 16:03:19 -- accel/accel.sh@19 -- # read -r var val 00:10:50.181 16:03:19 -- accel/accel.sh@20 -- # val= 00:10:50.181 16:03:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.181 16:03:20 -- accel/accel.sh@19 -- # IFS=: 00:10:50.181 16:03:20 -- accel/accel.sh@19 -- # read -r var val 00:10:50.181 16:03:20 -- accel/accel.sh@20 -- # val=software 00:10:50.181 16:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.181 16:03:20 -- accel/accel.sh@22 -- # accel_module=software 00:10:50.181 16:03:20 -- accel/accel.sh@19 -- # IFS=: 00:10:50.181 16:03:20 -- accel/accel.sh@19 -- # read -r var val 00:10:50.182 16:03:20 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:50.182 
16:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # IFS=: 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # read -r var val 00:10:50.182 16:03:20 -- accel/accel.sh@20 -- # val=32 00:10:50.182 16:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # IFS=: 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # read -r var val 00:10:50.182 16:03:20 -- accel/accel.sh@20 -- # val=32 00:10:50.182 16:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # IFS=: 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # read -r var val 00:10:50.182 16:03:20 -- accel/accel.sh@20 -- # val=1 00:10:50.182 16:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # IFS=: 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # read -r var val 00:10:50.182 16:03:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:50.182 16:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # IFS=: 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # read -r var val 00:10:50.182 16:03:20 -- accel/accel.sh@20 -- # val=Yes 00:10:50.182 16:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # IFS=: 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # read -r var val 00:10:50.182 16:03:20 -- accel/accel.sh@20 -- # val= 00:10:50.182 16:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # IFS=: 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # read -r var val 00:10:50.182 16:03:20 -- accel/accel.sh@20 -- # val= 00:10:50.182 16:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # IFS=: 00:10:50.182 16:03:20 -- accel/accel.sh@19 -- # read -r var val 00:10:51.558 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.558 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.558 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.558 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.558 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.558 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.558 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.558 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.558 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.558 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.558 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.558 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.558 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.558 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.558 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.558 16:03:21 -- 
accel/accel.sh@21 -- # case "$var" in 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.558 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.558 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.558 16:03:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:51.558 16:03:21 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:51.558 16:03:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:51.558 00:10:51.558 real 0m1.446s 00:10:51.558 user 0m4.608s 00:10:51.558 sys 0m0.124s 00:10:51.558 16:03:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:51.558 16:03:21 -- common/autotest_common.sh@10 -- # set +x 00:10:51.558 ************************************ 00:10:51.558 END TEST accel_decomp_full_mcore 00:10:51.558 ************************************ 00:10:51.558 16:03:21 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:51.558 16:03:21 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:51.558 16:03:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:51.558 16:03:21 -- common/autotest_common.sh@10 -- # set +x 00:10:51.558 ************************************ 00:10:51.558 START TEST accel_decomp_mthread 00:10:51.558 ************************************ 00:10:51.558 16:03:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:51.558 16:03:21 -- accel/accel.sh@16 -- # local accel_opc 00:10:51.558 16:03:21 -- accel/accel.sh@17 -- # local accel_module 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.558 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.558 16:03:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:51.558 16:03:21 -- accel/accel.sh@12 -- # build_accel_config 00:10:51.558 16:03:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:51.558 16:03:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:51.558 16:03:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:51.558 16:03:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:51.558 16:03:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:51.558 16:03:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:51.558 16:03:21 -- accel/accel.sh@40 -- # local IFS=, 00:10:51.558 16:03:21 -- accel/accel.sh@41 -- # jq -r . 00:10:51.558 [2024-04-15 16:03:21.307086] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
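The full_mcore summary just above is a useful sanity check: roughly 1.4 s of wall time but 4.6 s of user CPU, consistent with four reactors polling in parallel. The accel_decomp_mthread test that starts next scales the other way: it stays on a single core (the EAL line below shows -c 0x1) and uses -T 2, accel_perf's threads-per-core option. Sketch:

    # one core, two worker threads, 1-second verified decompress
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2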
00:10:51.558 [2024-04-15 16:03:21.307354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74344 ] 00:10:51.558 [2024-04-15 16:03:21.442491] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.558 [2024-04-15 16:03:21.497116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.558 [2024-04-15 16:03:21.498108] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val=0x1 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val=decompress 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val=software 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@22 -- # accel_module=software 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val=32 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- 
accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val=32 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val=2 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val=Yes 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:51.818 16:03:21 -- accel/accel.sh@20 -- # val= 00:10:51.818 16:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # IFS=: 00:10:51.818 16:03:21 -- accel/accel.sh@19 -- # read -r var val 00:10:52.754 16:03:22 -- accel/accel.sh@20 -- # val= 00:10:52.754 16:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:52.754 16:03:22 -- accel/accel.sh@19 -- # IFS=: 00:10:52.754 16:03:22 -- accel/accel.sh@19 -- # read -r var val 00:10:52.754 16:03:22 -- accel/accel.sh@20 -- # val= 00:10:52.754 16:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:52.754 16:03:22 -- accel/accel.sh@19 -- # IFS=: 00:10:52.754 16:03:22 -- accel/accel.sh@19 -- # read -r var val 00:10:52.754 16:03:22 -- accel/accel.sh@20 -- # val= 00:10:52.754 16:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:52.754 16:03:22 -- accel/accel.sh@19 -- # IFS=: 00:10:52.754 16:03:22 -- accel/accel.sh@19 -- # read -r var val 00:10:52.754 16:03:22 -- accel/accel.sh@20 -- # val= 00:10:52.754 16:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:52.754 16:03:22 -- accel/accel.sh@19 -- # IFS=: 00:10:52.754 16:03:22 -- accel/accel.sh@19 -- # read -r var val 00:10:52.754 16:03:22 -- accel/accel.sh@20 -- # val= 00:10:52.754 16:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:52.754 16:03:22 -- accel/accel.sh@19 -- # IFS=: 00:10:52.754 16:03:22 -- accel/accel.sh@19 -- # read -r var val 00:10:52.754 16:03:22 -- accel/accel.sh@20 -- # val= 00:10:52.754 16:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:52.754 16:03:22 -- accel/accel.sh@19 -- # IFS=: 00:10:52.754 16:03:22 -- accel/accel.sh@19 -- # read -r var val 00:10:52.754 16:03:22 -- accel/accel.sh@20 -- # val= 00:10:52.754 16:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:10:52.754 16:03:22 -- accel/accel.sh@19 -- # IFS=: 00:10:52.754 16:03:22 -- accel/accel.sh@19 -- # read -r var val 00:10:52.754 16:03:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:52.754 16:03:22 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:52.754 16:03:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:52.754 00:10:52.754 real 0m1.404s 00:10:52.754 user 0m1.204s 00:10:52.754 sys 0m0.102s 00:10:52.754 16:03:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:52.754 
16:03:22 -- common/autotest_common.sh@10 -- # set +x 00:10:52.754 ************************************ 00:10:52.754 END TEST accel_decomp_mthread 00:10:52.754 ************************************ 00:10:53.011 16:03:22 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:53.011 16:03:22 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:53.011 16:03:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.011 16:03:22 -- common/autotest_common.sh@10 -- # set +x 00:10:53.011 ************************************ 00:10:53.011 START TEST accel_deomp_full_mthread 00:10:53.011 ************************************ 00:10:53.011 16:03:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:53.011 16:03:22 -- accel/accel.sh@16 -- # local accel_opc 00:10:53.011 16:03:22 -- accel/accel.sh@17 -- # local accel_module 00:10:53.011 16:03:22 -- accel/accel.sh@19 -- # IFS=: 00:10:53.011 16:03:22 -- accel/accel.sh@19 -- # read -r var val 00:10:53.011 16:03:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:53.011 16:03:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:53.011 16:03:22 -- accel/accel.sh@12 -- # build_accel_config 00:10:53.011 16:03:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:53.011 16:03:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:53.011 16:03:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:53.011 16:03:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:53.011 16:03:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:53.012 16:03:22 -- accel/accel.sh@40 -- # local IFS=, 00:10:53.012 16:03:22 -- accel/accel.sh@41 -- # jq -r . 00:10:53.012 [2024-04-15 16:03:22.852652] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
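Every accel_perf run in this suite reads its accel configuration from -c /dev/fd/62: build_accel_config (test/accel/accel.sh) prints the JSON and the harness hands it over an extra file descriptor. A sketch of the same mechanism using process substitution; the inline config below is only a placeholder assumption, the real JSON comes from build_accel_config:

    # bash's <() creates the /dev/fd/NN path for us
    cfg='{"subsystems":[{"subsystem":"accel","config":[]}]}'   # placeholder, not the harness JSON
    ./build/examples/accel_perf -c <(printf '%s' "$cfg") \
        -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2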
00:10:53.012 [2024-04-15 16:03:22.852884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74388 ] 00:10:53.270 [2024-04-15 16:03:22.998508] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.270 [2024-04-15 16:03:23.053689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.270 [2024-04-15 16:03:23.054646] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val= 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val= 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val= 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val=0x1 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val= 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val= 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val=decompress 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val= 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val=software 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@22 -- # accel_module=software 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val=32 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- 
accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val=32 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val=2 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val=Yes 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val= 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 16:03:23 -- accel/accel.sh@20 -- # val= 00:10:53.270 16:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 16:03:23 -- accel/accel.sh@19 -- # read -r var val 00:10:54.645 16:03:24 -- accel/accel.sh@20 -- # val= 00:10:54.645 16:03:24 -- accel/accel.sh@21 -- # case "$var" in 00:10:54.645 16:03:24 -- accel/accel.sh@19 -- # IFS=: 00:10:54.645 16:03:24 -- accel/accel.sh@19 -- # read -r var val 00:10:54.645 16:03:24 -- accel/accel.sh@20 -- # val= 00:10:54.645 16:03:24 -- accel/accel.sh@21 -- # case "$var" in 00:10:54.645 16:03:24 -- accel/accel.sh@19 -- # IFS=: 00:10:54.645 16:03:24 -- accel/accel.sh@19 -- # read -r var val 00:10:54.645 16:03:24 -- accel/accel.sh@20 -- # val= 00:10:54.645 16:03:24 -- accel/accel.sh@21 -- # case "$var" in 00:10:54.645 16:03:24 -- accel/accel.sh@19 -- # IFS=: 00:10:54.645 16:03:24 -- accel/accel.sh@19 -- # read -r var val 00:10:54.645 16:03:24 -- accel/accel.sh@20 -- # val= 00:10:54.645 16:03:24 -- accel/accel.sh@21 -- # case "$var" in 00:10:54.645 16:03:24 -- accel/accel.sh@19 -- # IFS=: 00:10:54.645 16:03:24 -- accel/accel.sh@19 -- # read -r var val 00:10:54.645 16:03:24 -- accel/accel.sh@20 -- # val= 00:10:54.645 16:03:24 -- accel/accel.sh@21 -- # case "$var" in 00:10:54.645 16:03:24 -- accel/accel.sh@19 -- # IFS=: 00:10:54.645 16:03:24 -- accel/accel.sh@19 -- # read -r var val 00:10:54.645 16:03:24 -- accel/accel.sh@20 -- # val= 00:10:54.645 16:03:24 -- accel/accel.sh@21 -- # case "$var" in 00:10:54.645 16:03:24 -- accel/accel.sh@19 -- # IFS=: 00:10:54.645 16:03:24 -- accel/accel.sh@19 -- # read -r var val 00:10:54.645 16:03:24 -- accel/accel.sh@20 -- # val= 00:10:54.645 16:03:24 -- accel/accel.sh@21 -- # case "$var" in 00:10:54.645 16:03:24 -- accel/accel.sh@19 -- # IFS=: 00:10:54.645 16:03:24 -- accel/accel.sh@19 -- # read -r var val 00:10:54.645 16:03:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:54.645 16:03:24 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:54.645 16:03:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:54.645 00:10:54.645 real 0m1.451s 00:10:54.645 user 0m1.239s 00:10:54.645 sys 0m0.114s 00:10:54.645 16:03:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:54.645 
16:03:24 -- common/autotest_common.sh@10 -- # set +x 00:10:54.645 ************************************ 00:10:54.645 END TEST accel_deomp_full_mthread 00:10:54.645 ************************************ 00:10:54.645 16:03:24 -- accel/accel.sh@124 -- # [[ n == y ]] 00:10:54.645 16:03:24 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:54.645 16:03:24 -- accel/accel.sh@137 -- # build_accel_config 00:10:54.645 16:03:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:54.645 16:03:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:54.645 16:03:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:54.645 16:03:24 -- common/autotest_common.sh@10 -- # set +x 00:10:54.645 16:03:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:54.645 16:03:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.645 16:03:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.645 16:03:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:54.645 16:03:24 -- accel/accel.sh@40 -- # local IFS=, 00:10:54.645 16:03:24 -- accel/accel.sh@41 -- # jq -r . 00:10:54.645 ************************************ 00:10:54.645 START TEST accel_dif_functional_tests 00:10:54.645 ************************************ 00:10:54.645 16:03:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:54.645 [2024-04-15 16:03:24.441751] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:54.645 [2024-04-15 16:03:24.442007] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74422 ] 00:10:54.645 [2024-04-15 16:03:24.583138] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:54.904 [2024-04-15 16:03:24.633754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.904 [2024-04-15 16:03:24.633868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.904 [2024-04-15 16:03:24.633868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.904 [2024-04-15 16:03:24.634869] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:10:54.904 00:10:54.904 00:10:54.904 CUnit - A unit testing framework for C - Version 2.1-3 00:10:54.904 http://cunit.sourceforge.net/ 00:10:54.904 00:10:54.904 00:10:54.904 Suite: accel_dif 00:10:54.904 Test: verify: DIF generated, GUARD check ...passed 00:10:54.904 Test: verify: DIF generated, APPTAG check ...passed 00:10:54.904 Test: verify: DIF generated, REFTAG check ...passed 00:10:54.904 Test: verify: DIF not generated, GUARD check ...[2024-04-15 16:03:24.704888] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:54.904 passed[2024-04-15 16:03:24.705035] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:54.904 00:10:54.904 Test: verify: DIF not generated, APPTAG check ...[2024-04-15 16:03:24.705263] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:54.904 [2024-04-15 16:03:24.705340] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:54.904 passed 00:10:54.904 Test: verify: DIF not generated, REFTAG check ...[2024-04-15 16:03:24.705532] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref 
Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:54.904 [2024-04-15 16:03:24.705660] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:54.904 passed 00:10:54.904 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:54.904 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:10:54.904 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-04-15 16:03:24.705955] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:54.904 passed 00:10:54.904 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:54.904 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:54.904 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-15 16:03:24.706360] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:54.904 passed 00:10:54.904 Test: generate copy: DIF generated, GUARD check ...passed 00:10:54.904 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:54.904 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:54.904 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:54.904 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:54.904 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:54.904 Test: generate copy: iovecs-len validate ...[2024-04-15 16:03:24.707257] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned passed 00:10:54.904 Test: generate copy: buffer alignment validate ...with block_size. 00:10:54.904 passed 00:10:54.904 00:10:54.904 Run Summary: Type Total Ran Passed Failed Inactive 00:10:54.904 suites 1 1 n/a 0 0 00:10:54.904 tests 20 20 20 0 0 00:10:54.904 asserts 204 204 204 0 n/a 00:10:54.904 00:10:54.905 Elapsed time = 0.008 seconds 00:10:55.163 00:10:55.163 real 0m0.487s 00:10:55.163 user 0m0.581s 00:10:55.163 sys 0m0.127s 00:10:55.163 16:03:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:55.163 ************************************ 00:10:55.163 16:03:24 -- common/autotest_common.sh@10 -- # set +x 00:10:55.163 END TEST accel_dif_functional_tests 00:10:55.163 ************************************ 00:10:55.163 00:10:55.163 real 0m34.074s 00:10:55.163 user 0m34.121s 00:10:55.163 sys 0m4.735s 00:10:55.163 16:03:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:55.163 ************************************ 00:10:55.163 END TEST accel 00:10:55.163 ************************************ 00:10:55.163 16:03:24 -- common/autotest_common.sh@10 -- # set +x 00:10:55.163 16:03:24 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:10:55.163 16:03:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:55.163 16:03:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:55.163 16:03:24 -- common/autotest_common.sh@10 -- # set +x 00:10:55.163 ************************************ 00:10:55.163 START TEST accel_rpc 00:10:55.163 ************************************ 00:10:55.163 16:03:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:10:55.423 * Looking for test storage... 
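The accel_rpc suite that starts here exercises opcode assignment over JSON-RPC rather than the perf tool: spdk_tgt is started with --wait-for-rpc, the copy opcode is assigned (first to a bogus module, then to software), initialization is completed, and the assignment is read back. Condensed into a sketch (paths as in this log; the real script waits for the RPC socket with waitforlisten and cleans up with killprocess):

    ./build/bin/spdk_tgt --wait-for-rpc &
    tgt_pid=$!
    # pre-init RPCs: assign the copy opcode, then finish framework init
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect "software"
    kill "$tgt_pid"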
00:10:55.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:55.423 16:03:25 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:55.423 16:03:25 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=74499 00:10:55.423 16:03:25 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:55.423 16:03:25 -- accel/accel_rpc.sh@15 -- # waitforlisten 74499 00:10:55.423 16:03:25 -- common/autotest_common.sh@817 -- # '[' -z 74499 ']' 00:10:55.423 16:03:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.423 16:03:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:55.423 16:03:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.423 16:03:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:55.423 16:03:25 -- common/autotest_common.sh@10 -- # set +x 00:10:55.423 [2024-04-15 16:03:25.203626] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:55.423 [2024-04-15 16:03:25.204175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74499 ] 00:10:55.423 [2024-04-15 16:03:25.350248] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.682 [2024-04-15 16:03:25.397619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.616 16:03:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:56.616 16:03:26 -- common/autotest_common.sh@850 -- # return 0 00:10:56.616 16:03:26 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:56.616 16:03:26 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:56.616 16:03:26 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:56.616 16:03:26 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:56.616 16:03:26 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:56.616 16:03:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:56.616 16:03:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:56.616 16:03:26 -- common/autotest_common.sh@10 -- # set +x 00:10:56.616 ************************************ 00:10:56.616 START TEST accel_assign_opcode 00:10:56.616 ************************************ 00:10:56.616 16:03:26 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:10:56.616 16:03:26 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:56.616 16:03:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.616 16:03:26 -- common/autotest_common.sh@10 -- # set +x 00:10:56.616 [2024-04-15 16:03:26.310344] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:56.616 16:03:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.616 16:03:26 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:56.616 16:03:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.616 16:03:26 -- common/autotest_common.sh@10 -- # set +x 00:10:56.616 [2024-04-15 16:03:26.322342] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:56.616 16:03:26 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.616 16:03:26 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:56.616 16:03:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.616 16:03:26 -- common/autotest_common.sh@10 -- # set +x 00:10:56.616 16:03:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.616 16:03:26 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:56.616 16:03:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.616 16:03:26 -- accel/accel_rpc.sh@42 -- # grep software 00:10:56.616 16:03:26 -- common/autotest_common.sh@10 -- # set +x 00:10:56.616 16:03:26 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:56.616 16:03:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.616 software 00:10:56.616 00:10:56.616 real 0m0.262s 00:10:56.616 user 0m0.039s 00:10:56.616 sys 0m0.015s 00:10:56.616 16:03:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:56.616 16:03:26 -- common/autotest_common.sh@10 -- # set +x 00:10:56.616 ************************************ 00:10:56.616 END TEST accel_assign_opcode 00:10:56.616 ************************************ 00:10:56.874 16:03:26 -- accel/accel_rpc.sh@55 -- # killprocess 74499 00:10:56.874 16:03:26 -- common/autotest_common.sh@936 -- # '[' -z 74499 ']' 00:10:56.874 16:03:26 -- common/autotest_common.sh@940 -- # kill -0 74499 00:10:56.874 16:03:26 -- common/autotest_common.sh@941 -- # uname 00:10:56.874 16:03:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:56.874 16:03:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74499 00:10:56.874 killing process with pid 74499 00:10:56.874 16:03:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:56.874 16:03:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:56.874 16:03:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74499' 00:10:56.874 16:03:26 -- common/autotest_common.sh@955 -- # kill 74499 00:10:56.874 16:03:26 -- common/autotest_common.sh@960 -- # wait 74499 00:10:57.201 00:10:57.201 real 0m1.936s 00:10:57.201 user 0m2.087s 00:10:57.201 sys 0m0.466s 00:10:57.201 16:03:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:57.201 ************************************ 00:10:57.201 END TEST accel_rpc 00:10:57.201 ************************************ 00:10:57.201 16:03:26 -- common/autotest_common.sh@10 -- # set +x 00:10:57.201 16:03:27 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:57.201 16:03:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:57.201 16:03:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:57.201 16:03:27 -- common/autotest_common.sh@10 -- # set +x 00:10:57.201 ************************************ 00:10:57.201 START TEST app_cmdline 00:10:57.201 ************************************ 00:10:57.201 16:03:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:57.459 * Looking for test storage... 
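app_cmdline shifts the focus to RPC filtering: spdk_tgt is started with an allowlist of exactly two methods, so the env_dpdk_get_mem_stats call further down is expected to fail with JSON-RPC error -32601 ("Method not found"). Sketch of that check:

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    # (wait for the RPC socket to come up, as the harness does)
    ./scripts/rpc.py spdk_get_version         # allowed: returns the version object
    ./scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two methods
    ./scripts/rpc.py env_dpdk_get_mem_stats   # rejected by the allowlist (-32601)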
00:10:57.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:57.460 16:03:27 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:57.460 16:03:27 -- app/cmdline.sh@17 -- # spdk_tgt_pid=74601 00:10:57.460 16:03:27 -- app/cmdline.sh@18 -- # waitforlisten 74601 00:10:57.460 16:03:27 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:57.460 16:03:27 -- common/autotest_common.sh@817 -- # '[' -z 74601 ']' 00:10:57.460 16:03:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.460 16:03:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:57.460 16:03:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.460 16:03:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:57.460 16:03:27 -- common/autotest_common.sh@10 -- # set +x 00:10:57.460 [2024-04-15 16:03:27.230601] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:10:57.460 [2024-04-15 16:03:27.230906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74601 ] 00:10:57.460 [2024-04-15 16:03:27.367864] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.460 [2024-04-15 16:03:27.417270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.396 16:03:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:58.396 16:03:28 -- common/autotest_common.sh@850 -- # return 0 00:10:58.396 16:03:28 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:58.655 { 00:10:58.655 "version": "SPDK v24.05-pre git sha1 26d44a121", 00:10:58.655 "fields": { 00:10:58.655 "major": 24, 00:10:58.655 "minor": 5, 00:10:58.655 "patch": 0, 00:10:58.655 "suffix": "-pre", 00:10:58.655 "commit": "26d44a121" 00:10:58.655 } 00:10:58.655 } 00:10:58.655 16:03:28 -- app/cmdline.sh@22 -- # expected_methods=() 00:10:58.655 16:03:28 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:58.655 16:03:28 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:58.655 16:03:28 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:58.655 16:03:28 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:58.655 16:03:28 -- app/cmdline.sh@26 -- # sort 00:10:58.655 16:03:28 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:58.655 16:03:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.655 16:03:28 -- common/autotest_common.sh@10 -- # set +x 00:10:58.655 16:03:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.655 16:03:28 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:58.655 16:03:28 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:58.655 16:03:28 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:58.655 16:03:28 -- common/autotest_common.sh@638 -- # local es=0 00:10:58.655 16:03:28 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:58.655 16:03:28 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:58.655 16:03:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:58.655 16:03:28 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:58.655 16:03:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:58.655 16:03:28 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:58.655 16:03:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:58.655 16:03:28 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:58.655 16:03:28 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:58.655 16:03:28 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:58.917 request: 00:10:58.917 { 00:10:58.917 "method": "env_dpdk_get_mem_stats", 00:10:58.917 "req_id": 1 00:10:58.917 } 00:10:58.917 Got JSON-RPC error response 00:10:58.917 response: 00:10:58.917 { 00:10:58.917 "code": -32601, 00:10:58.917 "message": "Method not found" 00:10:58.917 } 00:10:58.917 16:03:28 -- common/autotest_common.sh@641 -- # es=1 00:10:58.917 16:03:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:58.917 16:03:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:58.917 16:03:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:58.917 16:03:28 -- app/cmdline.sh@1 -- # killprocess 74601 00:10:58.917 16:03:28 -- common/autotest_common.sh@936 -- # '[' -z 74601 ']' 00:10:58.917 16:03:28 -- common/autotest_common.sh@940 -- # kill -0 74601 00:10:58.917 16:03:28 -- common/autotest_common.sh@941 -- # uname 00:10:58.917 16:03:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:58.917 16:03:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74601 00:10:58.917 16:03:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:58.917 16:03:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:58.917 16:03:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74601' 00:10:58.917 killing process with pid 74601 00:10:58.917 16:03:28 -- common/autotest_common.sh@955 -- # kill 74601 00:10:58.917 16:03:28 -- common/autotest_common.sh@960 -- # wait 74601 00:10:59.177 ************************************ 00:10:59.177 END TEST app_cmdline 00:10:59.177 ************************************ 00:10:59.177 00:10:59.177 real 0m2.019s 00:10:59.177 user 0m2.496s 00:10:59.177 sys 0m0.481s 00:10:59.177 16:03:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:59.177 16:03:29 -- common/autotest_common.sh@10 -- # set +x 00:10:59.435 16:03:29 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:59.435 16:03:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:59.435 16:03:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:59.435 16:03:29 -- common/autotest_common.sh@10 -- # set +x 00:10:59.435 ************************************ 00:10:59.435 START TEST version 00:10:59.435 ************************************ 00:10:59.435 16:03:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:59.435 * Looking for test storage... 
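version.sh reassembles the version string from include/spdk/version.h with the grep/cut/tr pipelines shown below and then compares it with what the Python package reports. A minimal equivalent of its get_header_version helper, assuming the same tree layout:

    get_header_version() {
        # cut -f2 relies on the tab between the macro name and its value
        grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)     # 24 in this run
    minor=$(get_header_version MINOR)     # 5
    suffix=$(get_header_version SUFFIX)   # -pre, which the test turns into rc0 (24.5rc0)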
00:10:59.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:59.435 16:03:29 -- app/version.sh@17 -- # get_header_version major 00:10:59.435 16:03:29 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:59.435 16:03:29 -- app/version.sh@14 -- # tr -d '"' 00:10:59.435 16:03:29 -- app/version.sh@14 -- # cut -f2 00:10:59.435 16:03:29 -- app/version.sh@17 -- # major=24 00:10:59.435 16:03:29 -- app/version.sh@18 -- # get_header_version minor 00:10:59.435 16:03:29 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:59.435 16:03:29 -- app/version.sh@14 -- # tr -d '"' 00:10:59.435 16:03:29 -- app/version.sh@14 -- # cut -f2 00:10:59.435 16:03:29 -- app/version.sh@18 -- # minor=5 00:10:59.435 16:03:29 -- app/version.sh@19 -- # get_header_version patch 00:10:59.435 16:03:29 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:59.435 16:03:29 -- app/version.sh@14 -- # cut -f2 00:10:59.435 16:03:29 -- app/version.sh@14 -- # tr -d '"' 00:10:59.435 16:03:29 -- app/version.sh@19 -- # patch=0 00:10:59.435 16:03:29 -- app/version.sh@20 -- # get_header_version suffix 00:10:59.435 16:03:29 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:59.435 16:03:29 -- app/version.sh@14 -- # cut -f2 00:10:59.435 16:03:29 -- app/version.sh@14 -- # tr -d '"' 00:10:59.435 16:03:29 -- app/version.sh@20 -- # suffix=-pre 00:10:59.435 16:03:29 -- app/version.sh@22 -- # version=24.5 00:10:59.435 16:03:29 -- app/version.sh@25 -- # (( patch != 0 )) 00:10:59.435 16:03:29 -- app/version.sh@28 -- # version=24.5rc0 00:10:59.435 16:03:29 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:59.435 16:03:29 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:59.694 16:03:29 -- app/version.sh@30 -- # py_version=24.5rc0 00:10:59.694 16:03:29 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:10:59.694 ************************************ 00:10:59.694 END TEST version 00:10:59.694 ************************************ 00:10:59.694 00:10:59.694 real 0m0.189s 00:10:59.694 user 0m0.092s 00:10:59.694 sys 0m0.131s 00:10:59.694 16:03:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:59.694 16:03:29 -- common/autotest_common.sh@10 -- # set +x 00:10:59.694 16:03:29 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:10:59.694 16:03:29 -- spdk/autotest.sh@194 -- # uname -s 00:10:59.694 16:03:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:59.694 16:03:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:59.694 16:03:29 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:10:59.694 16:03:29 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:10:59.694 16:03:29 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:10:59.694 16:03:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:59.694 16:03:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:59.694 16:03:29 -- common/autotest_common.sh@10 -- # set +x 00:10:59.694 ************************************ 00:10:59.694 START TEST spdk_dd 00:10:59.694 
************************************ 00:10:59.694 16:03:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:10:59.953 * Looking for test storage... 00:10:59.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:59.953 16:03:29 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.953 16:03:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.953 16:03:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.953 16:03:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.953 16:03:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.953 16:03:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.953 16:03:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.953 16:03:29 -- paths/export.sh@5 -- # export PATH 00:10:59.953 16:03:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.953 16:03:29 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:00.211 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:00.211 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:00.211 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:00.211 16:03:30 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:11:00.211 16:03:30 -- dd/dd.sh@11 -- # nvme_in_userspace 00:11:00.211 16:03:30 -- scripts/common.sh@309 -- # local bdf bdfs 00:11:00.211 16:03:30 -- scripts/common.sh@310 -- # local nvmes 00:11:00.211 16:03:30 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:11:00.211 16:03:30 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:00.211 16:03:30 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:11:00.211 16:03:30 -- scripts/common.sh@295 -- # local bdf= 00:11:00.211 16:03:30 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:11:00.212 16:03:30 -- scripts/common.sh@230 -- # local class 
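The nvme_in_userspace trace that starts here (and continues below) enumerates NVMe controllers purely from lspci output: PCI class 01 (mass storage), subclass 08 (non-volatile memory controller) and prog-if 02 (NVMe) give the class code 0108 that the awk filter matches. Condensed to its core, with the pipeline taken verbatim from the trace, the enumeration is roughly:

    # Print the BDF of every NVMe controller (class code 0108, prog-if 02).
    # The real helper then runs each BDF through pci_can_use and checks
    # /sys/bus/pci/drivers/nvme, as traced below.
    lspci -mm -n -D \
        | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'

Since lspci -mm quotes its fields, $2 arrives as "0108" including the quotes, which is why the trace assigns cc with embedded quotes and strips them again with tr at the end.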
00:11:00.212 16:03:30 -- scripts/common.sh@231 -- # local subclass 00:11:00.212 16:03:30 -- scripts/common.sh@232 -- # local progif 00:11:00.212 16:03:30 -- scripts/common.sh@233 -- # printf %02x 1 00:11:00.212 16:03:30 -- scripts/common.sh@233 -- # class=01 00:11:00.212 16:03:30 -- scripts/common.sh@234 -- # printf %02x 8 00:11:00.212 16:03:30 -- scripts/common.sh@234 -- # subclass=08 00:11:00.212 16:03:30 -- scripts/common.sh@235 -- # printf %02x 2 00:11:00.212 16:03:30 -- scripts/common.sh@235 -- # progif=02 00:11:00.212 16:03:30 -- scripts/common.sh@237 -- # hash lspci 00:11:00.212 16:03:30 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:11:00.212 16:03:30 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:11:00.212 16:03:30 -- scripts/common.sh@240 -- # grep -i -- -p02 00:11:00.212 16:03:30 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:00.212 16:03:30 -- scripts/common.sh@242 -- # tr -d '"' 00:11:00.212 16:03:30 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:00.212 16:03:30 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:11:00.212 16:03:30 -- scripts/common.sh@15 -- # local i 00:11:00.212 16:03:30 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:00.212 16:03:30 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:00.212 16:03:30 -- scripts/common.sh@24 -- # return 0 00:11:00.212 16:03:30 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:11:00.212 16:03:30 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:00.212 16:03:30 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:11:00.212 16:03:30 -- scripts/common.sh@15 -- # local i 00:11:00.212 16:03:30 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:00.212 16:03:30 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:00.212 16:03:30 -- scripts/common.sh@24 -- # return 0 00:11:00.212 16:03:30 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:11:00.212 16:03:30 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:00.212 16:03:30 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:00.212 16:03:30 -- scripts/common.sh@320 -- # uname -s 00:11:00.212 16:03:30 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:00.212 16:03:30 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:00.212 16:03:30 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:00.212 16:03:30 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:00.212 16:03:30 -- scripts/common.sh@320 -- # uname -s 00:11:00.212 16:03:30 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:00.212 16:03:30 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:00.212 16:03:30 -- scripts/common.sh@325 -- # (( 2 )) 00:11:00.212 16:03:30 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:00.212 16:03:30 -- dd/dd.sh@13 -- # check_liburing 00:11:00.212 16:03:30 -- dd/common.sh@139 -- # local lib so 00:11:00.212 16:03:30 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:11:00.212 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.212 16:03:30 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:11:00.212 16:03:30 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 
-- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # 
read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_event.so.13.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_sock.so.9.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_util.so.9.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # 
[[ librte_kvargs.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:11:00.472 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.472 16:03:30 -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:11:00.473 16:03:30 -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:00.473 16:03:30 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:11:00.473 16:03:30 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:11:00.473 * spdk_dd linked to liburing 00:11:00.473 16:03:30 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:00.473 16:03:30 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:00.473 16:03:30 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:00.473 16:03:30 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:00.473 16:03:30 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:00.473 16:03:30 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:00.473 16:03:30 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:11:00.473 16:03:30 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:00.473 16:03:30 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:00.473 16:03:30 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:00.473 16:03:30 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:00.473 16:03:30 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:00.473 16:03:30 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:00.473 16:03:30 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:00.473 16:03:30 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:00.473 16:03:30 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:00.473 16:03:30 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:00.473 16:03:30 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:00.473 16:03:30 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:00.473 16:03:30 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:00.473 16:03:30 -- common/build_config.sh@19 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:00.473 16:03:30 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:00.473 16:03:30 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:00.473 16:03:30 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:00.473 16:03:30 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:00.473 16:03:30 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:00.473 16:03:30 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:00.473 16:03:30 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:00.473 16:03:30 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:00.473 16:03:30 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:00.473 16:03:30 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:00.473 16:03:30 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:00.473 16:03:30 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:00.473 16:03:30 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:00.473 16:03:30 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:00.473 16:03:30 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:00.473 16:03:30 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:00.473 16:03:30 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:11:00.473 16:03:30 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:00.473 16:03:30 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:00.473 16:03:30 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:00.473 16:03:30 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:00.473 16:03:30 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:11:00.473 16:03:30 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:00.473 16:03:30 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:00.473 16:03:30 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:00.473 16:03:30 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:00.473 16:03:30 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:11:00.473 16:03:30 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:11:00.473 16:03:30 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:00.473 16:03:30 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:11:00.473 16:03:30 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:11:00.473 16:03:30 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:11:00.473 16:03:30 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:11:00.473 16:03:30 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:11:00.473 16:03:30 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=y 00:11:00.473 16:03:30 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:11:00.473 16:03:30 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:11:00.473 16:03:30 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:11:00.473 16:03:30 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:11:00.473 16:03:30 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:11:00.473 16:03:30 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:11:00.473 16:03:30 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:11:00.473 16:03:30 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:00.473 16:03:30 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:11:00.473 16:03:30 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:11:00.473 
16:03:30 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:11:00.473 16:03:30 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:11:00.473 16:03:30 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:11:00.473 16:03:30 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:00.473 16:03:30 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:11:00.473 16:03:30 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:11:00.473 16:03:30 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:11:00.473 16:03:30 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:11:00.473 16:03:30 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:11:00.473 16:03:30 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:11:00.473 16:03:30 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:11:00.473 16:03:30 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:11:00.473 16:03:30 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:11:00.473 16:03:30 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:11:00.473 16:03:30 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:11:00.473 16:03:30 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:00.473 16:03:30 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:11:00.473 16:03:30 -- common/build_config.sh@82 -- # CONFIG_URING=y 00:11:00.473 16:03:30 -- dd/common.sh@149 -- # [[ y != y ]] 00:11:00.473 16:03:30 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:11:00.473 16:03:30 -- dd/common.sh@156 -- # export liburing_in_use=1 00:11:00.473 16:03:30 -- dd/common.sh@156 -- # liburing_in_use=1 00:11:00.473 16:03:30 -- dd/common.sh@157 -- # return 0 00:11:00.473 16:03:30 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:11:00.473 16:03:30 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:11:00.473 16:03:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:00.473 16:03:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:00.473 16:03:30 -- common/autotest_common.sh@10 -- # set +x 00:11:00.473 ************************************ 00:11:00.473 START TEST spdk_dd_basic_rw 00:11:00.473 ************************************ 00:11:00.473 16:03:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:11:00.473 * Looking for test storage... 
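check_liburing, traced above, decides whether the dd tests may exercise io_uring by asking the dynamic loader which shared objects the spdk_dd binary pulls in and looking for liburing among them; here liburing.so.2 matched, so liburing_in_use is set to 1. A minimal sketch of the same idea, with the binary path from the trace and an illustrative grep in place of the script's read loop:

    # LD_TRACE_LOADED_OBJECTS=1 makes the loader print the dependency list
    # (as ldd does) instead of running the program.
    if LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
            | grep -q 'liburing\.so\.'; then
        liburing_in_use=1    # liburing.so.2 matched in the trace above
    else
        liburing_in_use=0
    fi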
00:11:00.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:00.473 16:03:30 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:00.473 16:03:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.473 16:03:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.473 16:03:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.473 16:03:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.473 16:03:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.473 16:03:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.473 16:03:30 -- paths/export.sh@5 -- # export PATH 00:11:00.473 16:03:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.473 16:03:30 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:11:00.473 16:03:30 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:11:00.473 16:03:30 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:11:00.473 16:03:30 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:11:00.473 16:03:30 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:11:00.473 16:03:30 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:11:00.473 16:03:30 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:00.473 16:03:30 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:00.473 16:03:30 -- dd/basic_rw.sh@92 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:00.734 16:03:30 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:11:00.734 16:03:30 -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:11:00.734 16:03:30 -- dd/common.sh@126 -- # mapfile -t id 00:11:00.734 16:03:30 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:11:00.734 16:03:30 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric 
Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On 
Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:11:00.734 16:03:30 -- dd/common.sh@130 -- # lbaf=04 00:11:00.735 16:03:30 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID 
List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write 
Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:11:00.735 16:03:30 -- dd/common.sh@132 -- # lbaf=4096 00:11:00.735 16:03:30 -- dd/common.sh@134 -- # echo 4096 00:11:00.735 16:03:30 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:11:00.735 16:03:30 -- dd/basic_rw.sh@96 -- # : 00:11:00.735 16:03:30 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:00.735 16:03:30 -- dd/basic_rw.sh@96 -- # gen_conf 00:11:00.735 16:03:30 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:11:00.735 16:03:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:00.735 16:03:30 -- dd/common.sh@31 -- # xtrace_disable 00:11:00.735 16:03:30 -- common/autotest_common.sh@10 -- # set +x 00:11:00.735 16:03:30 -- common/autotest_common.sh@10 -- # set +x 00:11:00.994 { 
00:11:00.994 "subsystems": [ 00:11:00.994 { 00:11:00.994 "subsystem": "bdev", 00:11:00.994 "config": [ 00:11:00.994 { 00:11:00.994 "params": { 00:11:00.994 "trtype": "pcie", 00:11:00.994 "traddr": "0000:00:10.0", 00:11:00.994 "name": "Nvme0" 00:11:00.994 }, 00:11:00.994 "method": "bdev_nvme_attach_controller" 00:11:00.994 }, 00:11:00.994 { 00:11:00.994 "method": "bdev_wait_for_examine" 00:11:00.994 } 00:11:00.994 ] 00:11:00.994 } 00:11:00.994 ] 00:11:00.994 } 00:11:00.994 ************************************ 00:11:00.994 START TEST dd_bs_lt_native_bs 00:11:00.994 ************************************ 00:11:00.994 16:03:30 -- common/autotest_common.sh@1111 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:00.994 16:03:30 -- common/autotest_common.sh@638 -- # local es=0 00:11:00.994 16:03:30 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:00.994 16:03:30 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:00.994 16:03:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:00.994 16:03:30 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:00.994 16:03:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:00.994 16:03:30 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:00.994 16:03:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:00.994 16:03:30 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:00.994 16:03:30 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:00.994 16:03:30 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:00.994 [2024-04-15 16:03:30.784546] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
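The native block size that dd_bs_lt_native_bs compares against comes from get_native_nvme_bs, traced a little earlier: it captures spdk_nvme_identify output for the controller and reads the data size of the current LBA format, yielding native_bs=4096 here. A condensed sketch with the identify invocation and regexes from the trace; the scaffolding around BASH_REMATCH is illustrative:

    pci=0000:00:10.0
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")

    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}          # 04 in the dump above

    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] && native_bs=${BASH_REMATCH[1]}     # 4096 for LBA Format #04
    echo "$native_bs"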
00:11:00.994 [2024-04-15 16:03:30.785164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74938 ] 00:11:00.994 [2024-04-15 16:03:30.929439] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.252 [2024-04-15 16:03:30.985164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.252 [2024-04-15 16:03:30.986158] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:01.252 [2024-04-15 16:03:31.129478] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:11:01.252 [2024-04-15 16:03:31.129817] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:01.510 [2024-04-15 16:03:31.230325] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:01.510 16:03:31 -- common/autotest_common.sh@641 -- # es=234 00:11:01.510 16:03:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:01.510 16:03:31 -- common/autotest_common.sh@650 -- # es=106 00:11:01.510 16:03:31 -- common/autotest_common.sh@651 -- # case "$es" in 00:11:01.510 16:03:31 -- common/autotest_common.sh@658 -- # es=1 00:11:01.510 ************************************ 00:11:01.510 END TEST dd_bs_lt_native_bs 00:11:01.510 ************************************ 00:11:01.510 16:03:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:01.510 00:11:01.510 real 0m0.583s 00:11:01.510 user 0m0.324s 00:11:01.510 sys 0m0.151s 00:11:01.510 16:03:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:01.510 16:03:31 -- common/autotest_common.sh@10 -- # set +x 00:11:01.510 16:03:31 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:11:01.510 16:03:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:01.510 16:03:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:01.510 16:03:31 -- common/autotest_common.sh@10 -- # set +x 00:11:01.510 ************************************ 00:11:01.510 START TEST dd_rw 00:11:01.510 ************************************ 00:11:01.510 16:03:31 -- common/autotest_common.sh@1111 -- # basic_rw 4096 00:11:01.510 16:03:31 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:11:01.510 16:03:31 -- dd/basic_rw.sh@12 -- # local count size 00:11:01.510 16:03:31 -- dd/basic_rw.sh@13 -- # local qds bss 00:11:01.510 16:03:31 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:11:01.510 16:03:31 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:01.510 16:03:31 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:01.510 16:03:31 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:01.510 16:03:31 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:01.510 16:03:31 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:01.510 16:03:31 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:01.510 16:03:31 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:11:01.510 16:03:31 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:01.510 16:03:31 -- dd/basic_rw.sh@23 -- # count=15 00:11:01.510 16:03:31 -- dd/basic_rw.sh@24 -- # count=15 00:11:01.510 16:03:31 -- dd/basic_rw.sh@25 -- # size=61440 00:11:01.510 16:03:31 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:11:01.510 16:03:31 -- dd/common.sh@98 -- # xtrace_disable 00:11:01.510 16:03:31 -- common/autotest_common.sh@10 -- # set +x 00:11:02.446 16:03:32 -- dd/basic_rw.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:11:02.446 16:03:32 -- dd/basic_rw.sh@30 -- # gen_conf 00:11:02.446 16:03:32 -- dd/common.sh@31 -- # xtrace_disable 00:11:02.446 16:03:32 -- common/autotest_common.sh@10 -- # set +x 00:11:02.446 { 00:11:02.446 "subsystems": [ 00:11:02.446 { 00:11:02.446 "subsystem": "bdev", 00:11:02.446 "config": [ 00:11:02.446 { 00:11:02.446 "params": { 00:11:02.446 "trtype": "pcie", 00:11:02.446 "traddr": "0000:00:10.0", 00:11:02.446 "name": "Nvme0" 00:11:02.446 }, 00:11:02.446 "method": "bdev_nvme_attach_controller" 00:11:02.446 }, 00:11:02.446 { 00:11:02.446 "method": "bdev_wait_for_examine" 00:11:02.446 } 00:11:02.446 ] 00:11:02.446 } 00:11:02.446 ] 00:11:02.446 } 00:11:02.446 [2024-04-15 16:03:32.209075] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:02.446 [2024-04-15 16:03:32.209414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74984 ] 00:11:02.446 [2024-04-15 16:03:32.357063] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.446 [2024-04-15 16:03:32.406175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.446 [2024-04-15 16:03:32.407064] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:02.962  Copying: 60/60 [kB] (average 29 MBps) 00:11:02.962 00:11:02.962 16:03:32 -- dd/basic_rw.sh@37 -- # gen_conf 00:11:02.962 16:03:32 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:11:02.962 16:03:32 -- dd/common.sh@31 -- # xtrace_disable 00:11:02.962 16:03:32 -- common/autotest_common.sh@10 -- # set +x 00:11:02.962 { 00:11:02.962 "subsystems": [ 00:11:02.962 { 00:11:02.962 "subsystem": "bdev", 00:11:02.962 "config": [ 00:11:02.962 { 00:11:02.963 "params": { 00:11:02.963 "trtype": "pcie", 00:11:02.963 "traddr": "0000:00:10.0", 00:11:02.963 "name": "Nvme0" 00:11:02.963 }, 00:11:02.963 "method": "bdev_nvme_attach_controller" 00:11:02.963 }, 00:11:02.963 { 00:11:02.963 "method": "bdev_wait_for_examine" 00:11:02.963 } 00:11:02.963 ] 00:11:02.963 } 00:11:02.963 ] 00:11:02.963 } 00:11:02.963 [2024-04-15 16:03:32.775234] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
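The JSON block repeated before each copy above is the bdev subsystem configuration that the test hands to spdk_dd over an anonymous file descriptor (the --json /dev/fd/62 argument in the trace): it attaches the PCIe controller at 0000:00:10.0 as Nvme0 and waits for bdev examination before any I/O starts. A minimal standalone sketch of the same write step, assuming the config is built inline rather than by the test's gen_conf helper:

    # Hypothetical reproduction of the bs=4096, qd=1 write traced above.
    conf='{ "subsystems": [ { "subsystem": "bdev", "config": [
      { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
        "method": "bdev_nvme_attach_controller" },
      { "method": "bdev_wait_for_examine" } ] } ] }'
    # Process substitution yields a /dev/fd/NN path, matching the trace above.
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 \
        --json <(printf '%s' "$conf")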
00:11:02.963 [2024-04-15 16:03:32.775501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74992 ] 00:11:02.963 [2024-04-15 16:03:32.922744] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.221 [2024-04-15 16:03:32.981441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.221 [2024-04-15 16:03:32.982548] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:03.511  Copying: 60/60 [kB] (average 29 MBps) 00:11:03.511 00:11:03.511 16:03:33 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:03.511 16:03:33 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:11:03.511 16:03:33 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:03.511 16:03:33 -- dd/common.sh@11 -- # local nvme_ref= 00:11:03.511 16:03:33 -- dd/common.sh@12 -- # local size=61440 00:11:03.511 16:03:33 -- dd/common.sh@14 -- # local bs=1048576 00:11:03.511 16:03:33 -- dd/common.sh@15 -- # local count=1 00:11:03.511 16:03:33 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:03.511 16:03:33 -- dd/common.sh@18 -- # gen_conf 00:11:03.511 16:03:33 -- dd/common.sh@31 -- # xtrace_disable 00:11:03.511 16:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:03.511 [2024-04-15 16:03:33.364268] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:03.511 [2024-04-15 16:03:33.365056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75013 ] 00:11:03.511 { 00:11:03.511 "subsystems": [ 00:11:03.511 { 00:11:03.511 "subsystem": "bdev", 00:11:03.511 "config": [ 00:11:03.511 { 00:11:03.511 "params": { 00:11:03.511 "trtype": "pcie", 00:11:03.511 "traddr": "0000:00:10.0", 00:11:03.511 "name": "Nvme0" 00:11:03.511 }, 00:11:03.511 "method": "bdev_nvme_attach_controller" 00:11:03.511 }, 00:11:03.511 { 00:11:03.511 "method": "bdev_wait_for_examine" 00:11:03.511 } 00:11:03.511 ] 00:11:03.511 } 00:11:03.511 ] 00:11:03.511 } 00:11:03.768 [2024-04-15 16:03:33.504955] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.768 [2024-04-15 16:03:33.564910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.768 [2024-04-15 16:03:33.565784] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:04.026  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:04.026 00:11:04.026 16:03:33 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:04.026 16:03:33 -- dd/basic_rw.sh@23 -- # count=15 00:11:04.026 16:03:33 -- dd/basic_rw.sh@24 -- # count=15 00:11:04.026 16:03:33 -- dd/basic_rw.sh@25 -- # size=61440 00:11:04.026 16:03:33 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:11:04.026 16:03:33 -- dd/common.sh@98 -- # xtrace_disable 00:11:04.026 16:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:04.594 16:03:34 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:11:04.594 16:03:34 -- dd/basic_rw.sh@30 -- # gen_conf 00:11:04.594 16:03:34 -- 
dd/common.sh@31 -- # xtrace_disable 00:11:04.594 16:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.594 [2024-04-15 16:03:34.522518] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:04.594 [2024-04-15 16:03:34.522874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75032 ] 00:11:04.594 { 00:11:04.594 "subsystems": [ 00:11:04.594 { 00:11:04.594 "subsystem": "bdev", 00:11:04.594 "config": [ 00:11:04.594 { 00:11:04.594 "params": { 00:11:04.594 "trtype": "pcie", 00:11:04.594 "traddr": "0000:00:10.0", 00:11:04.594 "name": "Nvme0" 00:11:04.594 }, 00:11:04.594 "method": "bdev_nvme_attach_controller" 00:11:04.594 }, 00:11:04.594 { 00:11:04.594 "method": "bdev_wait_for_examine" 00:11:04.594 } 00:11:04.594 ] 00:11:04.594 } 00:11:04.594 ] 00:11:04.594 } 00:11:04.853 [2024-04-15 16:03:34.659780] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.853 [2024-04-15 16:03:34.712044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.853 [2024-04-15 16:03:34.712970] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:05.112  Copying: 60/60 [kB] (average 58 MBps) 00:11:05.112 00:11:05.112 16:03:35 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:11:05.112 16:03:35 -- dd/basic_rw.sh@37 -- # gen_conf 00:11:05.112 16:03:35 -- dd/common.sh@31 -- # xtrace_disable 00:11:05.112 16:03:35 -- common/autotest_common.sh@10 -- # set +x 00:11:05.370 { 00:11:05.370 "subsystems": [ 00:11:05.370 { 00:11:05.370 "subsystem": "bdev", 00:11:05.370 "config": [ 00:11:05.370 { 00:11:05.370 "params": { 00:11:05.370 "trtype": "pcie", 00:11:05.370 "traddr": "0000:00:10.0", 00:11:05.370 "name": "Nvme0" 00:11:05.370 }, 00:11:05.370 "method": "bdev_nvme_attach_controller" 00:11:05.370 }, 00:11:05.370 { 00:11:05.370 "method": "bdev_wait_for_examine" 00:11:05.370 } 00:11:05.370 ] 00:11:05.370 } 00:11:05.370 ] 00:11:05.370 } 00:11:05.370 [2024-04-15 16:03:35.090417] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:05.371 [2024-04-15 16:03:35.091766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75041 ] 00:11:05.371 [2024-04-15 16:03:35.234209] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.371 [2024-04-15 16:03:35.285610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.371 [2024-04-15 16:03:35.286690] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:05.889  Copying: 60/60 [kB] (average 29 MBps) 00:11:05.889 00:11:05.889 16:03:35 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:05.889 16:03:35 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:11:05.889 16:03:35 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:05.889 16:03:35 -- dd/common.sh@11 -- # local nvme_ref= 00:11:05.889 16:03:35 -- dd/common.sh@12 -- # local size=61440 00:11:05.889 16:03:35 -- dd/common.sh@14 -- # local bs=1048576 00:11:05.889 16:03:35 -- dd/common.sh@15 -- # local count=1 00:11:05.889 16:03:35 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:05.889 16:03:35 -- dd/common.sh@18 -- # gen_conf 00:11:05.889 16:03:35 -- dd/common.sh@31 -- # xtrace_disable 00:11:05.889 16:03:35 -- common/autotest_common.sh@10 -- # set +x 00:11:05.889 { 00:11:05.889 "subsystems": [ 00:11:05.889 { 00:11:05.889 "subsystem": "bdev", 00:11:05.889 "config": [ 00:11:05.889 { 00:11:05.889 "params": { 00:11:05.889 "trtype": "pcie", 00:11:05.889 "traddr": "0000:00:10.0", 00:11:05.889 "name": "Nvme0" 00:11:05.889 }, 00:11:05.889 "method": "bdev_nvme_attach_controller" 00:11:05.889 }, 00:11:05.889 { 00:11:05.889 "method": "bdev_wait_for_examine" 00:11:05.889 } 00:11:05.889 ] 00:11:05.889 } 00:11:05.889 ] 00:11:05.889 } 00:11:05.889 [2024-04-15 16:03:35.673395] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:05.889 [2024-04-15 16:03:35.673663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75061 ] 00:11:05.889 [2024-04-15 16:03:35.818406] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.147 [2024-04-15 16:03:35.871250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.147 [2024-04-15 16:03:35.872159] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:06.406  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:06.406 00:11:06.406 16:03:36 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:11:06.406 16:03:36 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:06.406 16:03:36 -- dd/basic_rw.sh@23 -- # count=7 00:11:06.406 16:03:36 -- dd/basic_rw.sh@24 -- # count=7 00:11:06.406 16:03:36 -- dd/basic_rw.sh@25 -- # size=57344 00:11:06.406 16:03:36 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:11:06.406 16:03:36 -- dd/common.sh@98 -- # xtrace_disable 00:11:06.406 16:03:36 -- common/autotest_common.sh@10 -- # set +x 00:11:06.973 16:03:36 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:11:06.973 16:03:36 -- dd/basic_rw.sh@30 -- # gen_conf 00:11:06.973 16:03:36 -- dd/common.sh@31 -- # xtrace_disable 00:11:06.973 16:03:36 -- common/autotest_common.sh@10 -- # set +x 00:11:06.973 { 00:11:06.973 "subsystems": [ 00:11:06.973 { 00:11:06.973 "subsystem": "bdev", 00:11:06.973 "config": [ 00:11:06.973 { 00:11:06.973 "params": { 00:11:06.973 "trtype": "pcie", 00:11:06.973 "traddr": "0000:00:10.0", 00:11:06.973 "name": "Nvme0" 00:11:06.973 }, 00:11:06.973 "method": "bdev_nvme_attach_controller" 00:11:06.973 }, 00:11:06.973 { 00:11:06.973 "method": "bdev_wait_for_examine" 00:11:06.973 } 00:11:06.973 ] 00:11:06.973 } 00:11:06.973 ] 00:11:06.973 } 00:11:06.973 [2024-04-15 16:03:36.889911] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:06.973 [2024-04-15 16:03:36.890050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75080 ] 00:11:07.232 [2024-04-15 16:03:37.037392] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.232 [2024-04-15 16:03:37.096745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.232 [2024-04-15 16:03:37.097534] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:07.490  Copying: 56/56 [kB] (average 27 MBps) 00:11:07.490 00:11:07.490 16:03:37 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:11:07.490 16:03:37 -- dd/basic_rw.sh@37 -- # gen_conf 00:11:07.490 16:03:37 -- dd/common.sh@31 -- # xtrace_disable 00:11:07.490 16:03:37 -- common/autotest_common.sh@10 -- # set +x 00:11:07.748 [2024-04-15 16:03:37.477995] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
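Each (block size, queue depth) iteration traced here follows the same verify cycle: spdk_dd writes the generated dd.dump0 file into the Nvme0n1 bdev, reads the same byte count back into dd.dump1, the script compares the two files with diff -q, and clear_nvme then writes a single 1 MiB block of zeroes from /dev/zero over the start of the bdev before the next iteration. A condensed sketch of one such cycle, assuming the same $conf JSON variable as in the earlier sketch:

    # One verify cycle as traced above (bs=8192, qd=1, count=7).
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json <(printf '%s' "$conf")
    ./build/bin/spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json <(printf '%s' "$conf")
    diff -q test/dd/dd.dump0 test/dd/dd.dump1                                # round-trip data check
    ./build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(printf '%s' "$conf")   # clear_nvme step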
00:11:07.748 [2024-04-15 16:03:37.478566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75099 ] 00:11:07.748 { 00:11:07.748 "subsystems": [ 00:11:07.748 { 00:11:07.748 "subsystem": "bdev", 00:11:07.748 "config": [ 00:11:07.748 { 00:11:07.748 "params": { 00:11:07.748 "trtype": "pcie", 00:11:07.748 "traddr": "0000:00:10.0", 00:11:07.748 "name": "Nvme0" 00:11:07.748 }, 00:11:07.748 "method": "bdev_nvme_attach_controller" 00:11:07.748 }, 00:11:07.748 { 00:11:07.748 "method": "bdev_wait_for_examine" 00:11:07.748 } 00:11:07.748 ] 00:11:07.748 } 00:11:07.748 ] 00:11:07.748 } 00:11:07.748 [2024-04-15 16:03:37.620842] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.748 [2024-04-15 16:03:37.678911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.748 [2024-04-15 16:03:37.679750] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:08.264  Copying: 56/56 [kB] (average 27 MBps) 00:11:08.264 00:11:08.264 16:03:38 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:08.264 16:03:38 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:11:08.264 16:03:38 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:08.264 16:03:38 -- dd/common.sh@11 -- # local nvme_ref= 00:11:08.264 16:03:38 -- dd/common.sh@12 -- # local size=57344 00:11:08.264 16:03:38 -- dd/common.sh@14 -- # local bs=1048576 00:11:08.264 16:03:38 -- dd/common.sh@15 -- # local count=1 00:11:08.264 16:03:38 -- dd/common.sh@18 -- # gen_conf 00:11:08.264 16:03:38 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:08.264 16:03:38 -- dd/common.sh@31 -- # xtrace_disable 00:11:08.264 16:03:38 -- common/autotest_common.sh@10 -- # set +x 00:11:08.264 { 00:11:08.264 "subsystems": [ 00:11:08.264 { 00:11:08.264 "subsystem": "bdev", 00:11:08.264 "config": [ 00:11:08.264 { 00:11:08.264 "params": { 00:11:08.264 "trtype": "pcie", 00:11:08.264 "traddr": "0000:00:10.0", 00:11:08.264 "name": "Nvme0" 00:11:08.264 }, 00:11:08.264 "method": "bdev_nvme_attach_controller" 00:11:08.264 }, 00:11:08.264 { 00:11:08.264 "method": "bdev_wait_for_examine" 00:11:08.264 } 00:11:08.264 ] 00:11:08.264 } 00:11:08.264 ] 00:11:08.264 } 00:11:08.264 [2024-04-15 16:03:38.075618] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:08.265 [2024-04-15 16:03:38.075713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75109 ] 00:11:08.265 [2024-04-15 16:03:38.219142] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.524 [2024-04-15 16:03:38.271367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.524 [2024-04-15 16:03:38.272240] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:08.782  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:08.782 00:11:08.782 16:03:38 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:08.782 16:03:38 -- dd/basic_rw.sh@23 -- # count=7 00:11:08.782 16:03:38 -- dd/basic_rw.sh@24 -- # count=7 00:11:08.782 16:03:38 -- dd/basic_rw.sh@25 -- # size=57344 00:11:08.782 16:03:38 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:11:08.782 16:03:38 -- dd/common.sh@98 -- # xtrace_disable 00:11:08.782 16:03:38 -- common/autotest_common.sh@10 -- # set +x 00:11:09.399 16:03:39 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:11:09.399 16:03:39 -- dd/basic_rw.sh@30 -- # gen_conf 00:11:09.399 16:03:39 -- dd/common.sh@31 -- # xtrace_disable 00:11:09.399 16:03:39 -- common/autotest_common.sh@10 -- # set +x 00:11:09.399 [2024-04-15 16:03:39.241048] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:09.399 [2024-04-15 16:03:39.241130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75134 ] 00:11:09.399 { 00:11:09.399 "subsystems": [ 00:11:09.399 { 00:11:09.399 "subsystem": "bdev", 00:11:09.399 "config": [ 00:11:09.399 { 00:11:09.399 "params": { 00:11:09.399 "trtype": "pcie", 00:11:09.399 "traddr": "0000:00:10.0", 00:11:09.399 "name": "Nvme0" 00:11:09.399 }, 00:11:09.399 "method": "bdev_nvme_attach_controller" 00:11:09.399 }, 00:11:09.399 { 00:11:09.399 "method": "bdev_wait_for_examine" 00:11:09.399 } 00:11:09.399 ] 00:11:09.399 } 00:11:09.399 ] 00:11:09.399 } 00:11:09.660 [2024-04-15 16:03:39.379068] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.660 [2024-04-15 16:03:39.462023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.660 [2024-04-15 16:03:39.463080] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:09.918  Copying: 56/56 [kB] (average 54 MBps) 00:11:09.918 00:11:09.918 16:03:39 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:11:09.918 16:03:39 -- dd/basic_rw.sh@37 -- # gen_conf 00:11:09.918 16:03:39 -- dd/common.sh@31 -- # xtrace_disable 00:11:09.918 16:03:39 -- common/autotest_common.sh@10 -- # set +x 00:11:09.918 [2024-04-15 16:03:39.849289] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:09.918 [2024-04-15 16:03:39.849394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75147 ] 00:11:09.918 { 00:11:09.918 "subsystems": [ 00:11:09.918 { 00:11:09.918 "subsystem": "bdev", 00:11:09.918 "config": [ 00:11:09.918 { 00:11:09.918 "params": { 00:11:09.918 "trtype": "pcie", 00:11:09.918 "traddr": "0000:00:10.0", 00:11:09.918 "name": "Nvme0" 00:11:09.918 }, 00:11:09.918 "method": "bdev_nvme_attach_controller" 00:11:09.918 }, 00:11:09.918 { 00:11:09.918 "method": "bdev_wait_for_examine" 00:11:09.918 } 00:11:09.918 ] 00:11:09.918 } 00:11:09.918 ] 00:11:09.918 } 00:11:10.177 [2024-04-15 16:03:39.994850] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.177 [2024-04-15 16:03:40.045779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.177 [2024-04-15 16:03:40.046491] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:10.436  Copying: 56/56 [kB] (average 54 MBps) 00:11:10.436 00:11:10.436 16:03:40 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:10.436 16:03:40 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:11:10.436 16:03:40 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:10.436 16:03:40 -- dd/common.sh@11 -- # local nvme_ref= 00:11:10.436 16:03:40 -- dd/common.sh@12 -- # local size=57344 00:11:10.436 16:03:40 -- dd/common.sh@14 -- # local bs=1048576 00:11:10.436 16:03:40 -- dd/common.sh@15 -- # local count=1 00:11:10.436 16:03:40 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:10.436 16:03:40 -- dd/common.sh@18 -- # gen_conf 00:11:10.436 16:03:40 -- dd/common.sh@31 -- # xtrace_disable 00:11:10.436 16:03:40 -- common/autotest_common.sh@10 -- # set +x 00:11:10.694 { 00:11:10.694 "subsystems": [ 00:11:10.694 { 00:11:10.694 "subsystem": "bdev", 00:11:10.694 "config": [ 00:11:10.694 { 00:11:10.694 "params": { 00:11:10.694 "trtype": "pcie", 00:11:10.694 "traddr": "0000:00:10.0", 00:11:10.694 "name": "Nvme0" 00:11:10.694 }, 00:11:10.694 "method": "bdev_nvme_attach_controller" 00:11:10.694 }, 00:11:10.694 { 00:11:10.694 "method": "bdev_wait_for_examine" 00:11:10.694 } 00:11:10.694 ] 00:11:10.694 } 00:11:10.694 ] 00:11:10.694 } 00:11:10.695 [2024-04-15 16:03:40.421058] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:10.695 [2024-04-15 16:03:40.421161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75163 ] 00:11:10.695 [2024-04-15 16:03:40.564385] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.695 [2024-04-15 16:03:40.609681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.695 [2024-04-15 16:03:40.610395] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:11.213  Copying: 1024/1024 [kB] (average 500 MBps) 00:11:11.213 00:11:11.213 16:03:40 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:11:11.213 16:03:40 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:11.213 16:03:40 -- dd/basic_rw.sh@23 -- # count=3 00:11:11.213 16:03:40 -- dd/basic_rw.sh@24 -- # count=3 00:11:11.213 16:03:40 -- dd/basic_rw.sh@25 -- # size=49152 00:11:11.213 16:03:40 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:11:11.213 16:03:40 -- dd/common.sh@98 -- # xtrace_disable 00:11:11.213 16:03:40 -- common/autotest_common.sh@10 -- # set +x 00:11:11.780 16:03:41 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:11:11.780 16:03:41 -- dd/basic_rw.sh@30 -- # gen_conf 00:11:11.780 16:03:41 -- dd/common.sh@31 -- # xtrace_disable 00:11:11.780 16:03:41 -- common/autotest_common.sh@10 -- # set +x 00:11:11.780 [2024-04-15 16:03:41.491496] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:11.780 [2024-04-15 16:03:41.491611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75182 ] 00:11:11.780 { 00:11:11.780 "subsystems": [ 00:11:11.780 { 00:11:11.780 "subsystem": "bdev", 00:11:11.780 "config": [ 00:11:11.780 { 00:11:11.780 "params": { 00:11:11.780 "trtype": "pcie", 00:11:11.780 "traddr": "0000:00:10.0", 00:11:11.780 "name": "Nvme0" 00:11:11.780 }, 00:11:11.780 "method": "bdev_nvme_attach_controller" 00:11:11.780 }, 00:11:11.780 { 00:11:11.780 "method": "bdev_wait_for_examine" 00:11:11.780 } 00:11:11.780 ] 00:11:11.780 } 00:11:11.780 ] 00:11:11.780 } 00:11:11.780 [2024-04-15 16:03:41.635565] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.780 [2024-04-15 16:03:41.683131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.780 [2024-04-15 16:03:41.683858] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:12.039  Copying: 48/48 [kB] (average 46 MBps) 00:11:12.039 00:11:12.039 16:03:41 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:11:12.039 16:03:41 -- dd/basic_rw.sh@37 -- # gen_conf 00:11:12.039 16:03:42 -- dd/common.sh@31 -- # xtrace_disable 00:11:12.039 16:03:42 -- common/autotest_common.sh@10 -- # set +x 00:11:12.298 [2024-04-15 16:03:42.050272] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
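The transfer sizes recurring in these traces follow directly from the sweep set up at the start of dd_rw: the block-size list is built by left-shifting the 4096-byte native block size (4096, 8192, 16384), each block size is paired with queue depths 1 and 64, and the counts traced are 15, 7 and 3, giving 15 x 4096 = 61440, 7 x 8192 = 57344 and 3 x 16384 = 49152 bytes per pass. A small sketch of that bookkeeping in plain bash, not the test's own helpers:

    native_bs=4096
    qds=(1 64)
    declare -A counts=([4096]=15 [8192]=7 [16384]=3)
    for i in 0 1 2; do
        bs=$(( native_bs << i ))
        size=$(( ${counts[$bs]} * bs ))      # 61440, 57344, 49152
        for qd in "${qds[@]}"; do
            echo "bs=$bs qd=$qd count=${counts[$bs]} size=$size"
        done
    done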
00:11:12.298 [2024-04-15 16:03:42.050393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75195 ] 00:11:12.298 { 00:11:12.298 "subsystems": [ 00:11:12.298 { 00:11:12.298 "subsystem": "bdev", 00:11:12.298 "config": [ 00:11:12.298 { 00:11:12.298 "params": { 00:11:12.298 "trtype": "pcie", 00:11:12.298 "traddr": "0000:00:10.0", 00:11:12.298 "name": "Nvme0" 00:11:12.298 }, 00:11:12.298 "method": "bdev_nvme_attach_controller" 00:11:12.298 }, 00:11:12.298 { 00:11:12.298 "method": "bdev_wait_for_examine" 00:11:12.298 } 00:11:12.298 ] 00:11:12.298 } 00:11:12.298 ] 00:11:12.298 } 00:11:12.298 [2024-04-15 16:03:42.194653] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.298 [2024-04-15 16:03:42.244354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.298 [2024-04-15 16:03:42.245031] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:12.869  Copying: 48/48 [kB] (average 46 MBps) 00:11:12.869 00:11:12.869 16:03:42 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:12.869 16:03:42 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:11:12.870 16:03:42 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:12.870 16:03:42 -- dd/common.sh@11 -- # local nvme_ref= 00:11:12.870 16:03:42 -- dd/common.sh@12 -- # local size=49152 00:11:12.870 16:03:42 -- dd/common.sh@14 -- # local bs=1048576 00:11:12.870 16:03:42 -- dd/common.sh@15 -- # local count=1 00:11:12.870 16:03:42 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:12.870 16:03:42 -- dd/common.sh@18 -- # gen_conf 00:11:12.870 16:03:42 -- dd/common.sh@31 -- # xtrace_disable 00:11:12.870 16:03:42 -- common/autotest_common.sh@10 -- # set +x 00:11:12.870 { 00:11:12.870 "subsystems": [ 00:11:12.870 { 00:11:12.870 "subsystem": "bdev", 00:11:12.870 "config": [ 00:11:12.870 { 00:11:12.870 "params": { 00:11:12.870 "trtype": "pcie", 00:11:12.870 "traddr": "0000:00:10.0", 00:11:12.870 "name": "Nvme0" 00:11:12.870 }, 00:11:12.870 "method": "bdev_nvme_attach_controller" 00:11:12.870 }, 00:11:12.870 { 00:11:12.870 "method": "bdev_wait_for_examine" 00:11:12.870 } 00:11:12.870 ] 00:11:12.870 } 00:11:12.870 ] 00:11:12.870 } 00:11:12.870 [2024-04-15 16:03:42.612075] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:12.870 [2024-04-15 16:03:42.612151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75211 ] 00:11:12.870 [2024-04-15 16:03:42.743553] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.870 [2024-04-15 16:03:42.794985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.870 [2024-04-15 16:03:42.795878] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:13.387  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:13.387 00:11:13.387 16:03:43 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:13.387 16:03:43 -- dd/basic_rw.sh@23 -- # count=3 00:11:13.387 16:03:43 -- dd/basic_rw.sh@24 -- # count=3 00:11:13.387 16:03:43 -- dd/basic_rw.sh@25 -- # size=49152 00:11:13.387 16:03:43 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:11:13.387 16:03:43 -- dd/common.sh@98 -- # xtrace_disable 00:11:13.387 16:03:43 -- common/autotest_common.sh@10 -- # set +x 00:11:13.953 16:03:43 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:11:13.953 16:03:43 -- dd/basic_rw.sh@30 -- # gen_conf 00:11:13.953 16:03:43 -- dd/common.sh@31 -- # xtrace_disable 00:11:13.953 16:03:43 -- common/autotest_common.sh@10 -- # set +x 00:11:13.953 [2024-04-15 16:03:43.691501] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:13.953 [2024-04-15 16:03:43.693086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75234 ] 00:11:13.953 { 00:11:13.953 "subsystems": [ 00:11:13.953 { 00:11:13.953 "subsystem": "bdev", 00:11:13.953 "config": [ 00:11:13.953 { 00:11:13.953 "params": { 00:11:13.953 "trtype": "pcie", 00:11:13.953 "traddr": "0000:00:10.0", 00:11:13.953 "name": "Nvme0" 00:11:13.953 }, 00:11:13.953 "method": "bdev_nvme_attach_controller" 00:11:13.953 }, 00:11:13.953 { 00:11:13.953 "method": "bdev_wait_for_examine" 00:11:13.953 } 00:11:13.953 ] 00:11:13.953 } 00:11:13.953 ] 00:11:13.953 } 00:11:13.953 [2024-04-15 16:03:43.841140] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.953 [2024-04-15 16:03:43.908129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.953 [2024-04-15 16:03:43.909151] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:14.468  Copying: 48/48 [kB] (average 46 MBps) 00:11:14.468 00:11:14.468 16:03:44 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:11:14.468 16:03:44 -- dd/basic_rw.sh@37 -- # gen_conf 00:11:14.468 16:03:44 -- dd/common.sh@31 -- # xtrace_disable 00:11:14.468 16:03:44 -- common/autotest_common.sh@10 -- # set +x 00:11:14.468 { 00:11:14.468 "subsystems": [ 00:11:14.468 { 00:11:14.468 "subsystem": "bdev", 00:11:14.468 "config": [ 00:11:14.468 { 00:11:14.468 "params": { 00:11:14.468 "trtype": "pcie", 00:11:14.468 "traddr": "0000:00:10.0", 00:11:14.468 "name": "Nvme0" 00:11:14.468 }, 00:11:14.469 "method": "bdev_nvme_attach_controller" 00:11:14.469 }, 00:11:14.469 { 00:11:14.469 "method": 
"bdev_wait_for_examine" 00:11:14.469 } 00:11:14.469 ] 00:11:14.469 } 00:11:14.469 ] 00:11:14.469 } 00:11:14.469 [2024-04-15 16:03:44.299392] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:14.469 [2024-04-15 16:03:44.299747] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75243 ] 00:11:14.727 [2024-04-15 16:03:44.449629] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.727 [2024-04-15 16:03:44.505802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.727 [2024-04-15 16:03:44.506851] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:14.985  Copying: 48/48 [kB] (average 46 MBps) 00:11:14.985 00:11:14.985 16:03:44 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:14.985 16:03:44 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:11:14.985 16:03:44 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:14.985 16:03:44 -- dd/common.sh@11 -- # local nvme_ref= 00:11:14.985 16:03:44 -- dd/common.sh@12 -- # local size=49152 00:11:14.985 16:03:44 -- dd/common.sh@14 -- # local bs=1048576 00:11:14.985 16:03:44 -- dd/common.sh@15 -- # local count=1 00:11:14.985 16:03:44 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:14.985 16:03:44 -- dd/common.sh@18 -- # gen_conf 00:11:14.985 16:03:44 -- dd/common.sh@31 -- # xtrace_disable 00:11:14.985 16:03:44 -- common/autotest_common.sh@10 -- # set +x 00:11:14.985 [2024-04-15 16:03:44.886798] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:14.985 [2024-04-15 16:03:44.887073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75264 ] 00:11:14.985 { 00:11:14.985 "subsystems": [ 00:11:14.985 { 00:11:14.985 "subsystem": "bdev", 00:11:14.985 "config": [ 00:11:14.985 { 00:11:14.985 "params": { 00:11:14.985 "trtype": "pcie", 00:11:14.985 "traddr": "0000:00:10.0", 00:11:14.985 "name": "Nvme0" 00:11:14.985 }, 00:11:14.985 "method": "bdev_nvme_attach_controller" 00:11:14.985 }, 00:11:14.985 { 00:11:14.985 "method": "bdev_wait_for_examine" 00:11:14.985 } 00:11:14.985 ] 00:11:14.985 } 00:11:14.985 ] 00:11:14.985 } 00:11:15.243 [2024-04-15 16:03:45.026204] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.243 [2024-04-15 16:03:45.076734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.243 [2024-04-15 16:03:45.077457] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:15.501  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:15.501 00:11:15.501 00:11:15.501 real 0m13.943s 00:11:15.501 user 0m9.715s 00:11:15.501 sys 0m5.215s 00:11:15.501 16:03:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:15.501 16:03:45 -- common/autotest_common.sh@10 -- # set +x 00:11:15.501 ************************************ 00:11:15.501 END TEST dd_rw 00:11:15.501 ************************************ 00:11:15.501 16:03:45 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:11:15.501 16:03:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:15.501 16:03:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:15.501 16:03:45 -- common/autotest_common.sh@10 -- # set +x 00:11:15.760 ************************************ 00:11:15.760 START TEST dd_rw_offset 00:11:15.760 ************************************ 00:11:15.760 16:03:45 -- common/autotest_common.sh@1111 -- # basic_offset 00:11:15.760 16:03:45 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:11:15.760 16:03:45 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:11:15.760 16:03:45 -- dd/common.sh@98 -- # xtrace_disable 00:11:15.760 16:03:45 -- common/autotest_common.sh@10 -- # set +x 00:11:15.760 16:03:45 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:11:15.761 16:03:45 -- dd/basic_rw.sh@56 -- # 
data=2smcnnoz7vls7qtwcjjmga8bl6uvavc9rie9fb0xqux9retkgo03i7qtyzcrv0d0w6y5059r92wbjfz4hvll4p6m9ffxim8uim57ma8tyeecesk332r787qw2msqszml2vzzgroegkf9q9no3rks2uh06tr9eb7s3sq2xuosu9ra3jjtnkfzsj83o9gjxtvr5yw1qcw7y34bnxlipwf5i1kr0utci030cljk0f6qq2lb6njk18pdbk3myh2516c59siwlrpopvm0jpw17e5p3lksyl4ibilgjsv1s1n6au1iiofw89vnu14delbc6sc1m9qr30g60l7q8ho4wj314ntujb4xihx2iw8o3bxiqjf1oq0qif3fwp4wnpl52j6bmxc6it9ud9z0cz72bhau1elh0dc0vj02qpuhkyxtl93n7a9bvml9h9i57f8na7fvai8r89dhktxzqjs98gd2qp2qxootq0momhmdca3jyu7bbjg9zvvqcosqevn3u2ww1w7tzleua2az86boeez2p36tybmzlyvhi4zii4msdhqciseidzmynp6e7bo0ehqw6jstd9afhss8j6c6jmbofwgdw663v7aibkhvwjlh8unyyd9zrtlqzujpzjwg5l2b7gozvowx4i0tlhcw4m3jrl5ellull4pjzxgc1spv376kmeulk8lilh0dypnnj6t51aarq8xw36cqggaj78ugtv3ko1jc93h9aiq82qj9vgu5cmiec1l9etkjxprtgvlipmia6g34g3fcxtr56x9mspr78ll4ruz8nz3b37x8qyocb981nlxy96mdwi1fk0l8h5uojk5bym5si9uveq95l2hnwkhhwpavs3mxeu7ggez8etu8qh9mqf1barvbx7ltoujs9r3a5yrhvjqf8h3jvxhn5touyyr3q6jnkpm0ehlomw7vrltgzcc1m8j659cfw38cp5yyegcg0oimh8elc8nlwgbqiw3ub3ac4wh41o8mbprxdp0imez9y9gci5hcm10adf9xaqkxc0dwjqtlkf6sonsgd9dxkeqfztnsmoir10h5t80jugxuu1fdj5arl6mn8nxsxoevuptemlaco1ckd1dqh1zq5wi6pty6a9s3m4t6anjj7dpo5hqknsxot964ymsv7u85nsz1n11tmcznh8comxvq0xhzw2dkt38p6ssimqwyskc4lhv4sb0ra6w8d37ke38qohhyonknhvx6telpdik5997yw3941uazoeqnaii2vy0vvwa3vdqiep97iugyna3b6pjl2b9vhy6nmjcuagv2pedi30xwj8aeazb7zkkwx4k2x7pqsu4e9mqel02ll15kp6dac1sg5f1nhcftiqaypu6ug1mxsr3w189u4tngm2bqof5r28aaqnrdq4b56gu4kbxh4qjwckegyc634duz7en5q3nc2fnf71qk61lyfzj73rdzatr5xdtakar5no2henn9t0xlnnxuaz0tuvlc0uneodqevcwpjq3jw3fird6vntop23gx49ie4esgul5caf8femq9japlnhz1jm1yo95jecyad77db22e1gxi911cztme75uymt6gtw6yigrkd2yritnq5881jk2w7k9jebrl4vcodg5po3kl6e46c0a844o8qe6vv8y4qs7b46sb5v5sawo2kie5f9dalgsm70dvitqguh6cxmk248nzamz3ly6n1c6pau2olgfszjhsn2k6duev2rkjmg0ljj6ts01xibgnm85rdt67z535ugv8ysu92jkhpy0u6mkhp597vrg92g1mj497b5twexuzf1vxlrqi8knvyl2eiv8z6u87lku5uzg9kkfnrr92ys5r2jl3aoz6r1vflrp37ytgfurlb3jn19sq1djrahcvxgwiuc8fmm3tmhkv3tj8cd4z21xi0o3ex9u94f5beoad2efy4gvnhhqe53ylq9q6pgvyr5h8hdnbb6nvixtkpqx2d9hld8d9gvcffkaokzl0zubogt4ydxg2vnkbo2q2m952m440c4t88q3i2dsw3qwlxxgb95thveauzdx34ty73mynpkvz37co38sr38t0e767q3jhy56ghd6py9mff1mvixoyvnw4xqgcql1dvgcrb26u53c7e39ihmpzbu9wux70rckihimyqzcze6d35ftziuf1m0mlx1utuvxm6mxzndrvljk1mplhp7x4o9vteqp6s0285w67vyb19ac1zgrfw3locrhofy9m05g43b6gilec85xlf53stt8lpglt8pdxzqsty7ow415dzy8jyq2s9cn2d84l1o4ezktknp62fhjf7rnf9vgddeh06j8sapml6xsfhen7cfxad6grbg02htq3vw8l3wdixixg0yeia70tv8y0gxu962jhsu43jqzyej5mm9i4w0r5c1se7qd27eizsh67mbhr2i7amz6ejn243il560nmg7jq20eifhynad4iupgny3d3dq6w2mca53owpli9udvlnht56thl7q1nvsaujaqxehav1nz1gdnlckxwood8segen8oxfxynuii8g4s0d0xq4qtk1w9wkpm0ntt1pem9mw1ipo1lpa3pywec2cqypabqkli6z0uprcjror02v4vnx5kvaimh6buhnlai8sf8m9m8b3md7fbzokgev90l7qsa343wevivpll5lme2qfg1y2jbu695w4xhup3pnrkwf1hem9b2qwi69u1r49mocwp5kwejj5v0kbf6d8utvg9c3lb4ocjvc9dp5umd41x1eyu1ai4yaqwomx23kmrju0dgeptddkbmccoevcb4t5vsu56hgtcwvc5iqnk6o7ev798dqh72q1b3ge1xci3n42laaunlgsgnx6772tndqh7telghuj34ioat7tb0omsxcsr49gp2g47r6lbmnw8mknnbawvmcak2tvof7spqd2qs7qhy1fpazx9fj162qnik05steyrlibd97g1ztsarptjeqtgowhcgg0l3t3gahg4w0zdvgocqlvws5mvz8z5hhe7gyp2612sljmt33f61drvko8ewlxrr6ujqaup9o7zlbslst4ff7lp0v3m5kxgto9sp55p3n6jikwb2rbnj2ipcdi75aoiekqn7011rw9zck2e0hnm29ipmmq8qtcj3qvd7lpztyuuhlvutjar2hhfyiaz01we28ytpzybkoawldpx0f7sl8it9uyvcm1z7qw6f3sg1qlaeobu7fp3yjnqk3nyjg59daya47iampo5g1tzki3ls8c0cwgp1j7d1rri43vfppj2h863wclg9hng97jv4xxcpmpd3os1remtc0yizicpw0odypmovfmeqxzhf4p9a8a4zw8aj0581doyb391ejwx9y2522mk814724chwxw7kdomqdl82k36c5wshc38u426by5qm2quej0tmmjgsts0px5zzm011mn38h17yd2b8n9ufuo9y7zgnxzjwcrx8hc9styw2255dfaebx5h356mmrpcrvgd36fqbufcc93
mmp4z8faxi80ovwi07afgnwdczdxtv1nk1wh3bhklak8x1o08tknd9j5228fmgq3n9eyjonzmx7mou0r61y6xiojjjz3kr2gu3duhvuiw1kukppaeq5adt9jmy5zjhqt3mvsoh31etielxf7b7clwic5932ril5eh78i8v5ev9duru7gbmiypx2t55az89p7bvgyqb7q7xljqt5pgig4k0t1r1nthwlpxibmzggweodlbccqmirnhr0a94bqzx71usop8yvmom46w5czflde2d71nl3me0ap70eib8bvgjfi06ec374a0es5fafblecndzrur5qxlj9npx8pzke4q49j8r64cmtlw9gj7nrga6728dsoqo8phgjbwz4sbygctue0xmdvq9vz1teyryju8925jdzaxsmo8ifmxdxtfi8iu8uhhchugkchbzmc8eozoe6qeics72lp46gqiwqdpbn2mwlnjo9ubpayku20s0g0gklkfls6sqh1oc2n3e235s2q0t1winggy2v2w7lwfs6y8otff5uj42 00:11:15.761 16:03:45 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:11:15.761 16:03:45 -- dd/basic_rw.sh@59 -- # gen_conf 00:11:15.761 16:03:45 -- dd/common.sh@31 -- # xtrace_disable 00:11:15.761 16:03:45 -- common/autotest_common.sh@10 -- # set +x 00:11:15.761 { 00:11:15.761 "subsystems": [ 00:11:15.761 { 00:11:15.761 "subsystem": "bdev", 00:11:15.761 "config": [ 00:11:15.761 { 00:11:15.761 "params": { 00:11:15.761 "trtype": "pcie", 00:11:15.761 "traddr": "0000:00:10.0", 00:11:15.761 "name": "Nvme0" 00:11:15.761 }, 00:11:15.761 "method": "bdev_nvme_attach_controller" 00:11:15.761 }, 00:11:15.761 { 00:11:15.761 "method": "bdev_wait_for_examine" 00:11:15.761 } 00:11:15.761 ] 00:11:15.761 } 00:11:15.761 ] 00:11:15.761 } 00:11:15.761 [2024-04-15 16:03:45.630912] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:15.761 [2024-04-15 16:03:45.631014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75299 ] 00:11:16.019 [2024-04-15 16:03:45.778227] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.019 [2024-04-15 16:03:45.834270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.019 [2024-04-15 16:03:45.835130] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:16.277  Copying: 4096/4096 [B] (average 4000 kBps) 00:11:16.277 00:11:16.277 16:03:46 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:11:16.277 16:03:46 -- dd/basic_rw.sh@65 -- # gen_conf 00:11:16.277 16:03:46 -- dd/common.sh@31 -- # xtrace_disable 00:11:16.277 16:03:46 -- common/autotest_common.sh@10 -- # set +x 00:11:16.562 [2024-04-15 16:03:46.268479] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:16.562 [2024-04-15 16:03:46.268589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75312 ] 00:11:16.562 { 00:11:16.562 "subsystems": [ 00:11:16.562 { 00:11:16.562 "subsystem": "bdev", 00:11:16.562 "config": [ 00:11:16.562 { 00:11:16.562 "params": { 00:11:16.562 "trtype": "pcie", 00:11:16.562 "traddr": "0000:00:10.0", 00:11:16.562 "name": "Nvme0" 00:11:16.562 }, 00:11:16.562 "method": "bdev_nvme_attach_controller" 00:11:16.562 }, 00:11:16.562 { 00:11:16.562 "method": "bdev_wait_for_examine" 00:11:16.562 } 00:11:16.562 ] 00:11:16.562 } 00:11:16.562 ] 00:11:16.562 } 00:11:16.562 [2024-04-15 16:03:46.408289] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.562 [2024-04-15 16:03:46.463008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.562 [2024-04-15 16:03:46.463867] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:17.080  Copying: 4096/4096 [B] (average 4000 kBps) 00:11:17.080 00:11:17.080 16:03:46 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:11:17.080 ************************************ 00:11:17.080 END TEST dd_rw_offset 00:11:17.080 ************************************ 00:11:17.080 16:03:46 -- dd/basic_rw.sh@72 -- # [[ 2smcnnoz7vls7qtwcjjmga8bl6uvavc9rie9fb0xqux9retkgo03i7qtyzcrv0d0w6y5059r92wbjfz4hvll4p6m9ffxim8uim57ma8tyeecesk332r787qw2msqszml2vzzgroegkf9q9no3rks2uh06tr9eb7s3sq2xuosu9ra3jjtnkfzsj83o9gjxtvr5yw1qcw7y34bnxlipwf5i1kr0utci030cljk0f6qq2lb6njk18pdbk3myh2516c59siwlrpopvm0jpw17e5p3lksyl4ibilgjsv1s1n6au1iiofw89vnu14delbc6sc1m9qr30g60l7q8ho4wj314ntujb4xihx2iw8o3bxiqjf1oq0qif3fwp4wnpl52j6bmxc6it9ud9z0cz72bhau1elh0dc0vj02qpuhkyxtl93n7a9bvml9h9i57f8na7fvai8r89dhktxzqjs98gd2qp2qxootq0momhmdca3jyu7bbjg9zvvqcosqevn3u2ww1w7tzleua2az86boeez2p36tybmzlyvhi4zii4msdhqciseidzmynp6e7bo0ehqw6jstd9afhss8j6c6jmbofwgdw663v7aibkhvwjlh8unyyd9zrtlqzujpzjwg5l2b7gozvowx4i0tlhcw4m3jrl5ellull4pjzxgc1spv376kmeulk8lilh0dypnnj6t51aarq8xw36cqggaj78ugtv3ko1jc93h9aiq82qj9vgu5cmiec1l9etkjxprtgvlipmia6g34g3fcxtr56x9mspr78ll4ruz8nz3b37x8qyocb981nlxy96mdwi1fk0l8h5uojk5bym5si9uveq95l2hnwkhhwpavs3mxeu7ggez8etu8qh9mqf1barvbx7ltoujs9r3a5yrhvjqf8h3jvxhn5touyyr3q6jnkpm0ehlomw7vrltgzcc1m8j659cfw38cp5yyegcg0oimh8elc8nlwgbqiw3ub3ac4wh41o8mbprxdp0imez9y9gci5hcm10adf9xaqkxc0dwjqtlkf6sonsgd9dxkeqfztnsmoir10h5t80jugxuu1fdj5arl6mn8nxsxoevuptemlaco1ckd1dqh1zq5wi6pty6a9s3m4t6anjj7dpo5hqknsxot964ymsv7u85nsz1n11tmcznh8comxvq0xhzw2dkt38p6ssimqwyskc4lhv4sb0ra6w8d37ke38qohhyonknhvx6telpdik5997yw3941uazoeqnaii2vy0vvwa3vdqiep97iugyna3b6pjl2b9vhy6nmjcuagv2pedi30xwj8aeazb7zkkwx4k2x7pqsu4e9mqel02ll15kp6dac1sg5f1nhcftiqaypu6ug1mxsr3w189u4tngm2bqof5r28aaqnrdq4b56gu4kbxh4qjwckegyc634duz7en5q3nc2fnf71qk61lyfzj73rdzatr5xdtakar5no2henn9t0xlnnxuaz0tuvlc0uneodqevcwpjq3jw3fird6vntop23gx49ie4esgul5caf8femq9japlnhz1jm1yo95jecyad77db22e1gxi911cztme75uymt6gtw6yigrkd2yritnq5881jk2w7k9jebrl4vcodg5po3kl6e46c0a844o8qe6vv8y4qs7b46sb5v5sawo2kie5f9dalgsm70dvitqguh6cxmk248nzamz3ly6n1c6pau2olgfszjhsn2k6duev2rkjmg0ljj6ts01xibgnm85rdt67z535ugv8ysu92jkhpy0u6mkhp597vrg92g1mj497b5twexuzf1vxlrqi8knvyl2eiv8z6u87lku5uzg9kkfnrr92ys5r2jl3aoz6r1vflrp37ytgfurlb3jn19sq1djrahcvxgwiuc8fmm3tmhkv3tj8cd4z21xi0o3ex9u94f5beoad2efy4gvnhhqe53ylq9q6pgvyr5h8hdnbb6nvixtkpqx2d9hld8d9gvcffkaokzl0zubogt4ydxg2vnkbo2q2m952m440c4t88q3i2dsw3qwlxxgb95thveauzdx34ty73mynpkvz37co38sr38t0e767q3jhy56ghd6py9mff1mvixoyvnw4xqgcq
l1dvgcrb26u53c7e39ihmpzbu9wux70rckihimyqzcze6d35ftziuf1m0mlx1utuvxm6mxzndrvljk1mplhp7x4o9vteqp6s0285w67vyb19ac1zgrfw3locrhofy9m05g43b6gilec85xlf53stt8lpglt8pdxzqsty7ow415dzy8jyq2s9cn2d84l1o4ezktknp62fhjf7rnf9vgddeh06j8sapml6xsfhen7cfxad6grbg02htq3vw8l3wdixixg0yeia70tv8y0gxu962jhsu43jqzyej5mm9i4w0r5c1se7qd27eizsh67mbhr2i7amz6ejn243il560nmg7jq20eifhynad4iupgny3d3dq6w2mca53owpli9udvlnht56thl7q1nvsaujaqxehav1nz1gdnlckxwood8segen8oxfxynuii8g4s0d0xq4qtk1w9wkpm0ntt1pem9mw1ipo1lpa3pywec2cqypabqkli6z0uprcjror02v4vnx5kvaimh6buhnlai8sf8m9m8b3md7fbzokgev90l7qsa343wevivpll5lme2qfg1y2jbu695w4xhup3pnrkwf1hem9b2qwi69u1r49mocwp5kwejj5v0kbf6d8utvg9c3lb4ocjvc9dp5umd41x1eyu1ai4yaqwomx23kmrju0dgeptddkbmccoevcb4t5vsu56hgtcwvc5iqnk6o7ev798dqh72q1b3ge1xci3n42laaunlgsgnx6772tndqh7telghuj34ioat7tb0omsxcsr49gp2g47r6lbmnw8mknnbawvmcak2tvof7spqd2qs7qhy1fpazx9fj162qnik05steyrlibd97g1ztsarptjeqtgowhcgg0l3t3gahg4w0zdvgocqlvws5mvz8z5hhe7gyp2612sljmt33f61drvko8ewlxrr6ujqaup9o7zlbslst4ff7lp0v3m5kxgto9sp55p3n6jikwb2rbnj2ipcdi75aoiekqn7011rw9zck2e0hnm29ipmmq8qtcj3qvd7lpztyuuhlvutjar2hhfyiaz01we28ytpzybkoawldpx0f7sl8it9uyvcm1z7qw6f3sg1qlaeobu7fp3yjnqk3nyjg59daya47iampo5g1tzki3ls8c0cwgp1j7d1rri43vfppj2h863wclg9hng97jv4xxcpmpd3os1remtc0yizicpw0odypmovfmeqxzhf4p9a8a4zw8aj0581doyb391ejwx9y2522mk814724chwxw7kdomqdl82k36c5wshc38u426by5qm2quej0tmmjgsts0px5zzm011mn38h17yd2b8n9ufuo9y7zgnxzjwcrx8hc9styw2255dfaebx5h356mmrpcrvgd36fqbufcc93mmp4z8faxi80ovwi07afgnwdczdxtv1nk1wh3bhklak8x1o08tknd9j5228fmgq3n9eyjonzmx7mou0r61y6xiojjjz3kr2gu3duhvuiw1kukppaeq5adt9jmy5zjhqt3mvsoh31etielxf7b7clwic5932ril5eh78i8v5ev9duru7gbmiypx2t55az89p7bvgyqb7q7xljqt5pgig4k0t1r1nthwlpxibmzggweodlbccqmirnhr0a94bqzx71usop8yvmom46w5czflde2d71nl3me0ap70eib8bvgjfi06ec374a0es5fafblecndzrur5qxlj9npx8pzke4q49j8r64cmtlw9gj7nrga6728dsoqo8phgjbwz4sbygctue0xmdvq9vz1teyryju8925jdzaxsmo8ifmxdxtfi8iu8uhhchugkchbzmc8eozoe6qeics72lp46gqiwqdpbn2mwlnjo9ubpayku20s0g0gklkfls6sqh1oc2n3e235s2q0t1winggy2v2w7lwfs6y8otff5uj42 == 
\2\s\m\c\n\n\o\z\7\v\l\s\7\q\t\w\c\j\j\m\g\a\8\b\l\6\u\v\a\v\c\9\r\i\e\9\f\b\0\x\q\u\x\9\r\e\t\k\g\o\0\3\i\7\q\t\y\z\c\r\v\0\d\0\w\6\y\5\0\5\9\r\9\2\w\b\j\f\z\4\h\v\l\l\4\p\6\m\9\f\f\x\i\m\8\u\i\m\5\7\m\a\8\t\y\e\e\c\e\s\k\3\3\2\r\7\8\7\q\w\2\m\s\q\s\z\m\l\2\v\z\z\g\r\o\e\g\k\f\9\q\9\n\o\3\r\k\s\2\u\h\0\6\t\r\9\e\b\7\s\3\s\q\2\x\u\o\s\u\9\r\a\3\j\j\t\n\k\f\z\s\j\8\3\o\9\g\j\x\t\v\r\5\y\w\1\q\c\w\7\y\3\4\b\n\x\l\i\p\w\f\5\i\1\k\r\0\u\t\c\i\0\3\0\c\l\j\k\0\f\6\q\q\2\l\b\6\n\j\k\1\8\p\d\b\k\3\m\y\h\2\5\1\6\c\5\9\s\i\w\l\r\p\o\p\v\m\0\j\p\w\1\7\e\5\p\3\l\k\s\y\l\4\i\b\i\l\g\j\s\v\1\s\1\n\6\a\u\1\i\i\o\f\w\8\9\v\n\u\1\4\d\e\l\b\c\6\s\c\1\m\9\q\r\3\0\g\6\0\l\7\q\8\h\o\4\w\j\3\1\4\n\t\u\j\b\4\x\i\h\x\2\i\w\8\o\3\b\x\i\q\j\f\1\o\q\0\q\i\f\3\f\w\p\4\w\n\p\l\5\2\j\6\b\m\x\c\6\i\t\9\u\d\9\z\0\c\z\7\2\b\h\a\u\1\e\l\h\0\d\c\0\v\j\0\2\q\p\u\h\k\y\x\t\l\9\3\n\7\a\9\b\v\m\l\9\h\9\i\5\7\f\8\n\a\7\f\v\a\i\8\r\8\9\d\h\k\t\x\z\q\j\s\9\8\g\d\2\q\p\2\q\x\o\o\t\q\0\m\o\m\h\m\d\c\a\3\j\y\u\7\b\b\j\g\9\z\v\v\q\c\o\s\q\e\v\n\3\u\2\w\w\1\w\7\t\z\l\e\u\a\2\a\z\8\6\b\o\e\e\z\2\p\3\6\t\y\b\m\z\l\y\v\h\i\4\z\i\i\4\m\s\d\h\q\c\i\s\e\i\d\z\m\y\n\p\6\e\7\b\o\0\e\h\q\w\6\j\s\t\d\9\a\f\h\s\s\8\j\6\c\6\j\m\b\o\f\w\g\d\w\6\6\3\v\7\a\i\b\k\h\v\w\j\l\h\8\u\n\y\y\d\9\z\r\t\l\q\z\u\j\p\z\j\w\g\5\l\2\b\7\g\o\z\v\o\w\x\4\i\0\t\l\h\c\w\4\m\3\j\r\l\5\e\l\l\u\l\l\4\p\j\z\x\g\c\1\s\p\v\3\7\6\k\m\e\u\l\k\8\l\i\l\h\0\d\y\p\n\n\j\6\t\5\1\a\a\r\q\8\x\w\3\6\c\q\g\g\a\j\7\8\u\g\t\v\3\k\o\1\j\c\9\3\h\9\a\i\q\8\2\q\j\9\v\g\u\5\c\m\i\e\c\1\l\9\e\t\k\j\x\p\r\t\g\v\l\i\p\m\i\a\6\g\3\4\g\3\f\c\x\t\r\5\6\x\9\m\s\p\r\7\8\l\l\4\r\u\z\8\n\z\3\b\3\7\x\8\q\y\o\c\b\9\8\1\n\l\x\y\9\6\m\d\w\i\1\f\k\0\l\8\h\5\u\o\j\k\5\b\y\m\5\s\i\9\u\v\e\q\9\5\l\2\h\n\w\k\h\h\w\p\a\v\s\3\m\x\e\u\7\g\g\e\z\8\e\t\u\8\q\h\9\m\q\f\1\b\a\r\v\b\x\7\l\t\o\u\j\s\9\r\3\a\5\y\r\h\v\j\q\f\8\h\3\j\v\x\h\n\5\t\o\u\y\y\r\3\q\6\j\n\k\p\m\0\e\h\l\o\m\w\7\v\r\l\t\g\z\c\c\1\m\8\j\6\5\9\c\f\w\3\8\c\p\5\y\y\e\g\c\g\0\o\i\m\h\8\e\l\c\8\n\l\w\g\b\q\i\w\3\u\b\3\a\c\4\w\h\4\1\o\8\m\b\p\r\x\d\p\0\i\m\e\z\9\y\9\g\c\i\5\h\c\m\1\0\a\d\f\9\x\a\q\k\x\c\0\d\w\j\q\t\l\k\f\6\s\o\n\s\g\d\9\d\x\k\e\q\f\z\t\n\s\m\o\i\r\1\0\h\5\t\8\0\j\u\g\x\u\u\1\f\d\j\5\a\r\l\6\m\n\8\n\x\s\x\o\e\v\u\p\t\e\m\l\a\c\o\1\c\k\d\1\d\q\h\1\z\q\5\w\i\6\p\t\y\6\a\9\s\3\m\4\t\6\a\n\j\j\7\d\p\o\5\h\q\k\n\s\x\o\t\9\6\4\y\m\s\v\7\u\8\5\n\s\z\1\n\1\1\t\m\c\z\n\h\8\c\o\m\x\v\q\0\x\h\z\w\2\d\k\t\3\8\p\6\s\s\i\m\q\w\y\s\k\c\4\l\h\v\4\s\b\0\r\a\6\w\8\d\3\7\k\e\3\8\q\o\h\h\y\o\n\k\n\h\v\x\6\t\e\l\p\d\i\k\5\9\9\7\y\w\3\9\4\1\u\a\z\o\e\q\n\a\i\i\2\v\y\0\v\v\w\a\3\v\d\q\i\e\p\9\7\i\u\g\y\n\a\3\b\6\p\j\l\2\b\9\v\h\y\6\n\m\j\c\u\a\g\v\2\p\e\d\i\3\0\x\w\j\8\a\e\a\z\b\7\z\k\k\w\x\4\k\2\x\7\p\q\s\u\4\e\9\m\q\e\l\0\2\l\l\1\5\k\p\6\d\a\c\1\s\g\5\f\1\n\h\c\f\t\i\q\a\y\p\u\6\u\g\1\m\x\s\r\3\w\1\8\9\u\4\t\n\g\m\2\b\q\o\f\5\r\2\8\a\a\q\n\r\d\q\4\b\5\6\g\u\4\k\b\x\h\4\q\j\w\c\k\e\g\y\c\6\3\4\d\u\z\7\e\n\5\q\3\n\c\2\f\n\f\7\1\q\k\6\1\l\y\f\z\j\7\3\r\d\z\a\t\r\5\x\d\t\a\k\a\r\5\n\o\2\h\e\n\n\9\t\0\x\l\n\n\x\u\a\z\0\t\u\v\l\c\0\u\n\e\o\d\q\e\v\c\w\p\j\q\3\j\w\3\f\i\r\d\6\v\n\t\o\p\2\3\g\x\4\9\i\e\4\e\s\g\u\l\5\c\a\f\8\f\e\m\q\9\j\a\p\l\n\h\z\1\j\m\1\y\o\9\5\j\e\c\y\a\d\7\7\d\b\2\2\e\1\g\x\i\9\1\1\c\z\t\m\e\7\5\u\y\m\t\6\g\t\w\6\y\i\g\r\k\d\2\y\r\i\t\n\q\5\8\8\1\j\k\2\w\7\k\9\j\e\b\r\l\4\v\c\o\d\g\5\p\o\3\k\l\6\e\4\6\c\0\a\8\4\4\o\8\q\e\6\v\v\8\y\4\q\s\7\b\4\6\s\b\5\v\5\s\a\w\o\2\k\i\e\5\f\9\d\a\l\g\s\m\7\0\d\v\i\t\q\g\u\h\6\c\x\m\k\2\4\8\n\z\a\m\z\3\l\y\6\n\1\c\6\p\a\u\2\o\l\g\f\s\z\j\h\s\n\2\k\6\d\u\e\v\2\r\k\j\m\g\0\l\j\j\6\t\s\0\1\x\i\b\g\n\m\8\5\r\d\t\6\7\z\
5\3\5\u\g\v\8\y\s\u\9\2\j\k\h\p\y\0\u\6\m\k\h\p\5\9\7\v\r\g\9\2\g\1\m\j\4\9\7\b\5\t\w\e\x\u\z\f\1\v\x\l\r\q\i\8\k\n\v\y\l\2\e\i\v\8\z\6\u\8\7\l\k\u\5\u\z\g\9\k\k\f\n\r\r\9\2\y\s\5\r\2\j\l\3\a\o\z\6\r\1\v\f\l\r\p\3\7\y\t\g\f\u\r\l\b\3\j\n\1\9\s\q\1\d\j\r\a\h\c\v\x\g\w\i\u\c\8\f\m\m\3\t\m\h\k\v\3\t\j\8\c\d\4\z\2\1\x\i\0\o\3\e\x\9\u\9\4\f\5\b\e\o\a\d\2\e\f\y\4\g\v\n\h\h\q\e\5\3\y\l\q\9\q\6\p\g\v\y\r\5\h\8\h\d\n\b\b\6\n\v\i\x\t\k\p\q\x\2\d\9\h\l\d\8\d\9\g\v\c\f\f\k\a\o\k\z\l\0\z\u\b\o\g\t\4\y\d\x\g\2\v\n\k\b\o\2\q\2\m\9\5\2\m\4\4\0\c\4\t\8\8\q\3\i\2\d\s\w\3\q\w\l\x\x\g\b\9\5\t\h\v\e\a\u\z\d\x\3\4\t\y\7\3\m\y\n\p\k\v\z\3\7\c\o\3\8\s\r\3\8\t\0\e\7\6\7\q\3\j\h\y\5\6\g\h\d\6\p\y\9\m\f\f\1\m\v\i\x\o\y\v\n\w\4\x\q\g\c\q\l\1\d\v\g\c\r\b\2\6\u\5\3\c\7\e\3\9\i\h\m\p\z\b\u\9\w\u\x\7\0\r\c\k\i\h\i\m\y\q\z\c\z\e\6\d\3\5\f\t\z\i\u\f\1\m\0\m\l\x\1\u\t\u\v\x\m\6\m\x\z\n\d\r\v\l\j\k\1\m\p\l\h\p\7\x\4\o\9\v\t\e\q\p\6\s\0\2\8\5\w\6\7\v\y\b\1\9\a\c\1\z\g\r\f\w\3\l\o\c\r\h\o\f\y\9\m\0\5\g\4\3\b\6\g\i\l\e\c\8\5\x\l\f\5\3\s\t\t\8\l\p\g\l\t\8\p\d\x\z\q\s\t\y\7\o\w\4\1\5\d\z\y\8\j\y\q\2\s\9\c\n\2\d\8\4\l\1\o\4\e\z\k\t\k\n\p\6\2\f\h\j\f\7\r\n\f\9\v\g\d\d\e\h\0\6\j\8\s\a\p\m\l\6\x\s\f\h\e\n\7\c\f\x\a\d\6\g\r\b\g\0\2\h\t\q\3\v\w\8\l\3\w\d\i\x\i\x\g\0\y\e\i\a\7\0\t\v\8\y\0\g\x\u\9\6\2\j\h\s\u\4\3\j\q\z\y\e\j\5\m\m\9\i\4\w\0\r\5\c\1\s\e\7\q\d\2\7\e\i\z\s\h\6\7\m\b\h\r\2\i\7\a\m\z\6\e\j\n\2\4\3\i\l\5\6\0\n\m\g\7\j\q\2\0\e\i\f\h\y\n\a\d\4\i\u\p\g\n\y\3\d\3\d\q\6\w\2\m\c\a\5\3\o\w\p\l\i\9\u\d\v\l\n\h\t\5\6\t\h\l\7\q\1\n\v\s\a\u\j\a\q\x\e\h\a\v\1\n\z\1\g\d\n\l\c\k\x\w\o\o\d\8\s\e\g\e\n\8\o\x\f\x\y\n\u\i\i\8\g\4\s\0\d\0\x\q\4\q\t\k\1\w\9\w\k\p\m\0\n\t\t\1\p\e\m\9\m\w\1\i\p\o\1\l\p\a\3\p\y\w\e\c\2\c\q\y\p\a\b\q\k\l\i\6\z\0\u\p\r\c\j\r\o\r\0\2\v\4\v\n\x\5\k\v\a\i\m\h\6\b\u\h\n\l\a\i\8\s\f\8\m\9\m\8\b\3\m\d\7\f\b\z\o\k\g\e\v\9\0\l\7\q\s\a\3\4\3\w\e\v\i\v\p\l\l\5\l\m\e\2\q\f\g\1\y\2\j\b\u\6\9\5\w\4\x\h\u\p\3\p\n\r\k\w\f\1\h\e\m\9\b\2\q\w\i\6\9\u\1\r\4\9\m\o\c\w\p\5\k\w\e\j\j\5\v\0\k\b\f\6\d\8\u\t\v\g\9\c\3\l\b\4\o\c\j\v\c\9\d\p\5\u\m\d\4\1\x\1\e\y\u\1\a\i\4\y\a\q\w\o\m\x\2\3\k\m\r\j\u\0\d\g\e\p\t\d\d\k\b\m\c\c\o\e\v\c\b\4\t\5\v\s\u\5\6\h\g\t\c\w\v\c\5\i\q\n\k\6\o\7\e\v\7\9\8\d\q\h\7\2\q\1\b\3\g\e\1\x\c\i\3\n\4\2\l\a\a\u\n\l\g\s\g\n\x\6\7\7\2\t\n\d\q\h\7\t\e\l\g\h\u\j\3\4\i\o\a\t\7\t\b\0\o\m\s\x\c\s\r\4\9\g\p\2\g\4\7\r\6\l\b\m\n\w\8\m\k\n\n\b\a\w\v\m\c\a\k\2\t\v\o\f\7\s\p\q\d\2\q\s\7\q\h\y\1\f\p\a\z\x\9\f\j\1\6\2\q\n\i\k\0\5\s\t\e\y\r\l\i\b\d\9\7\g\1\z\t\s\a\r\p\t\j\e\q\t\g\o\w\h\c\g\g\0\l\3\t\3\g\a\h\g\4\w\0\z\d\v\g\o\c\q\l\v\w\s\5\m\v\z\8\z\5\h\h\e\7\g\y\p\2\6\1\2\s\l\j\m\t\3\3\f\6\1\d\r\v\k\o\8\e\w\l\x\r\r\6\u\j\q\a\u\p\9\o\7\z\l\b\s\l\s\t\4\f\f\7\l\p\0\v\3\m\5\k\x\g\t\o\9\s\p\5\5\p\3\n\6\j\i\k\w\b\2\r\b\n\j\2\i\p\c\d\i\7\5\a\o\i\e\k\q\n\7\0\1\1\r\w\9\z\c\k\2\e\0\h\n\m\2\9\i\p\m\m\q\8\q\t\c\j\3\q\v\d\7\l\p\z\t\y\u\u\h\l\v\u\t\j\a\r\2\h\h\f\y\i\a\z\0\1\w\e\2\8\y\t\p\z\y\b\k\o\a\w\l\d\p\x\0\f\7\s\l\8\i\t\9\u\y\v\c\m\1\z\7\q\w\6\f\3\s\g\1\q\l\a\e\o\b\u\7\f\p\3\y\j\n\q\k\3\n\y\j\g\5\9\d\a\y\a\4\7\i\a\m\p\o\5\g\1\t\z\k\i\3\l\s\8\c\0\c\w\g\p\1\j\7\d\1\r\r\i\4\3\v\f\p\p\j\2\h\8\6\3\w\c\l\g\9\h\n\g\9\7\j\v\4\x\x\c\p\m\p\d\3\o\s\1\r\e\m\t\c\0\y\i\z\i\c\p\w\0\o\d\y\p\m\o\v\f\m\e\q\x\z\h\f\4\p\9\a\8\a\4\z\w\8\a\j\0\5\8\1\d\o\y\b\3\9\1\e\j\w\x\9\y\2\5\2\2\m\k\8\1\4\7\2\4\c\h\w\x\w\7\k\d\o\m\q\d\l\8\2\k\3\6\c\5\w\s\h\c\3\8\u\4\2\6\b\y\5\q\m\2\q\u\e\j\0\t\m\m\j\g\s\t\s\0\p\x\5\z\z\m\0\1\1\m\n\3\8\h\1\7\y\d\2\b\8\n\9\u\f\u\o\9\y\7\z\g\n\x\z\j\w\c\r\x\8\h\c\9\s\t\y\w\2\2\5\5\d\f\a\e\b\x\5\h\3\5\6\m\m\r\p\c\r\v\g\d\3\6\f\q\b\u\f\c\c\9\3\m\m\p\4\z
\8\f\a\x\i\8\0\o\v\w\i\0\7\a\f\g\n\w\d\c\z\d\x\t\v\1\n\k\1\w\h\3\b\h\k\l\a\k\8\x\1\o\0\8\t\k\n\d\9\j\5\2\2\8\f\m\g\q\3\n\9\e\y\j\o\n\z\m\x\7\m\o\u\0\r\6\1\y\6\x\i\o\j\j\j\z\3\k\r\2\g\u\3\d\u\h\v\u\i\w\1\k\u\k\p\p\a\e\q\5\a\d\t\9\j\m\y\5\z\j\h\q\t\3\m\v\s\o\h\3\1\e\t\i\e\l\x\f\7\b\7\c\l\w\i\c\5\9\3\2\r\i\l\5\e\h\7\8\i\8\v\5\e\v\9\d\u\r\u\7\g\b\m\i\y\p\x\2\t\5\5\a\z\8\9\p\7\b\v\g\y\q\b\7\q\7\x\l\j\q\t\5\p\g\i\g\4\k\0\t\1\r\1\n\t\h\w\l\p\x\i\b\m\z\g\g\w\e\o\d\l\b\c\c\q\m\i\r\n\h\r\0\a\9\4\b\q\z\x\7\1\u\s\o\p\8\y\v\m\o\m\4\6\w\5\c\z\f\l\d\e\2\d\7\1\n\l\3\m\e\0\a\p\7\0\e\i\b\8\b\v\g\j\f\i\0\6\e\c\3\7\4\a\0\e\s\5\f\a\f\b\l\e\c\n\d\z\r\u\r\5\q\x\l\j\9\n\p\x\8\p\z\k\e\4\q\4\9\j\8\r\6\4\c\m\t\l\w\9\g\j\7\n\r\g\a\6\7\2\8\d\s\o\q\o\8\p\h\g\j\b\w\z\4\s\b\y\g\c\t\u\e\0\x\m\d\v\q\9\v\z\1\t\e\y\r\y\j\u\8\9\2\5\j\d\z\a\x\s\m\o\8\i\f\m\x\d\x\t\f\i\8\i\u\8\u\h\h\c\h\u\g\k\c\h\b\z\m\c\8\e\o\z\o\e\6\q\e\i\c\s\7\2\l\p\4\6\g\q\i\w\q\d\p\b\n\2\m\w\l\n\j\o\9\u\b\p\a\y\k\u\2\0\s\0\g\0\g\k\l\k\f\l\s\6\s\q\h\1\o\c\2\n\3\e\2\3\5\s\2\q\0\t\1\w\i\n\g\g\y\2\v\2\w\7\l\w\f\s\6\y\8\o\t\f\f\5\u\j\4\2 ]] 00:11:17.080 00:11:17.080 real 0m1.283s 00:11:17.080 user 0m0.853s 00:11:17.080 sys 0m0.591s 00:11:17.080 16:03:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:17.080 16:03:46 -- common/autotest_common.sh@10 -- # set +x 00:11:17.080 16:03:46 -- dd/basic_rw.sh@1 -- # cleanup 00:11:17.080 16:03:46 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:11:17.080 16:03:46 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:17.080 16:03:46 -- dd/common.sh@11 -- # local nvme_ref= 00:11:17.080 16:03:46 -- dd/common.sh@12 -- # local size=0xffff 00:11:17.080 16:03:46 -- dd/common.sh@14 -- # local bs=1048576 00:11:17.080 16:03:46 -- dd/common.sh@15 -- # local count=1 00:11:17.080 16:03:46 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:17.080 16:03:46 -- dd/common.sh@18 -- # gen_conf 00:11:17.080 16:03:46 -- dd/common.sh@31 -- # xtrace_disable 00:11:17.080 16:03:46 -- common/autotest_common.sh@10 -- # set +x 00:11:17.080 { 00:11:17.080 "subsystems": [ 00:11:17.080 { 00:11:17.080 "subsystem": "bdev", 00:11:17.080 "config": [ 00:11:17.080 { 00:11:17.080 "params": { 00:11:17.080 "trtype": "pcie", 00:11:17.080 "traddr": "0000:00:10.0", 00:11:17.080 "name": "Nvme0" 00:11:17.080 }, 00:11:17.080 "method": "bdev_nvme_attach_controller" 00:11:17.080 }, 00:11:17.080 { 00:11:17.080 "method": "bdev_wait_for_examine" 00:11:17.080 } 00:11:17.080 ] 00:11:17.080 } 00:11:17.080 ] 00:11:17.080 } 00:11:17.080 [2024-04-15 16:03:46.900737] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
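dd_rw_offset, whose timing summary appears just above, checks offset handling rather than throughput: 4096 generated bytes are written to the bdev with --seek=1, read back with --skip=1 --count=1 into dd.dump1, and the readback is compared against the original string. The long backslash-heavy block is not corruption; it is bash xtrace re-quoting every character of the right-hand operand of the [[ ... == ... ]] comparison so it stays a literal match rather than a glob pattern. A minimal sketch of the two offset transfers, assuming the same $conf as earlier (the exact redirection behind the readback comparison is not shown in the trace, so it is assumed here):

    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$conf")
    ./build/bin/spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --skip=1 --count=1 --json <(printf '%s' "$conf")
    read -rn4096 data_check < test/dd/dd.dump1                               # first 4096 bytes of the readback
    [[ "$(< test/dd/dd.dump0)" == "$data_check" ]] && echo "offset round-trip OK"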
00:11:17.080 [2024-04-15 16:03:46.900870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75342 ] 00:11:17.356 [2024-04-15 16:03:47.045502] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.356 [2024-04-15 16:03:47.094537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.356 [2024-04-15 16:03:47.095248] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:17.614  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:17.614 00:11:17.614 16:03:47 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:17.614 00:11:17.614 real 0m17.085s 00:11:17.614 user 0m11.541s 00:11:17.614 sys 0m6.554s 00:11:17.614 16:03:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:17.614 16:03:47 -- common/autotest_common.sh@10 -- # set +x 00:11:17.614 ************************************ 00:11:17.614 END TEST spdk_dd_basic_rw 00:11:17.614 ************************************ 00:11:17.614 16:03:47 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:11:17.614 16:03:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:17.614 16:03:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:17.614 16:03:47 -- common/autotest_common.sh@10 -- # set +x 00:11:17.614 ************************************ 00:11:17.614 START TEST spdk_dd_posix 00:11:17.614 ************************************ 00:11:17.614 16:03:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:11:17.876 * Looking for test storage... 
00:11:17.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:17.876 16:03:47 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.876 16:03:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.876 16:03:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.876 16:03:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.876 16:03:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.876 16:03:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.876 16:03:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.876 16:03:47 -- paths/export.sh@5 -- # export PATH 00:11:17.876 16:03:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.876 16:03:47 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:11:17.876 16:03:47 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:11:17.876 16:03:47 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:11:17.876 16:03:47 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:11:17.876 16:03:47 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:17.876 16:03:47 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:17.876 16:03:47 -- dd/posix.sh@130 -- # tests 00:11:17.876 16:03:47 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:11:17.877 * First test run, liburing in use 00:11:17.877 16:03:47 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:11:17.877 16:03:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:17.877 16:03:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:17.877 16:03:47 -- common/autotest_common.sh@10 -- # set +x 00:11:17.877 ************************************ 00:11:17.877 START TEST dd_flag_append 00:11:17.877 ************************************ 00:11:17.877 16:03:47 -- common/autotest_common.sh@1111 -- # append 00:11:17.877 16:03:47 -- dd/posix.sh@16 -- # local dump0 00:11:17.877 16:03:47 -- dd/posix.sh@17 -- # local dump1 00:11:17.877 16:03:47 -- dd/posix.sh@19 -- # gen_bytes 32 00:11:17.877 16:03:47 -- dd/common.sh@98 -- # xtrace_disable 00:11:17.877 16:03:47 -- common/autotest_common.sh@10 -- # set +x 00:11:17.877 16:03:47 -- dd/posix.sh@19 -- # dump0=0r9uu9cslvnd2mmhpce6f0cij6yqx9xv 00:11:17.877 16:03:47 -- dd/posix.sh@20 -- # gen_bytes 32 00:11:17.877 16:03:47 -- dd/common.sh@98 -- # xtrace_disable 00:11:17.877 16:03:47 -- common/autotest_common.sh@10 -- # set +x 00:11:17.877 16:03:47 -- dd/posix.sh@20 -- # dump1=fz6e6ukfzq6ftnr56rmfbossxlxkiu9b 00:11:17.877 16:03:47 -- dd/posix.sh@22 -- # printf %s 0r9uu9cslvnd2mmhpce6f0cij6yqx9xv 00:11:17.877 16:03:47 -- dd/posix.sh@23 -- # printf %s fz6e6ukfzq6ftnr56rmfbossxlxkiu9b 00:11:17.877 16:03:47 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:11:17.877 [2024-04-15 16:03:47.769989] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:17.877 [2024-04-15 16:03:47.770088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75415 ] 00:11:18.135 [2024-04-15 16:03:47.905529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.135 [2024-04-15 16:03:47.973251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.135 [2024-04-15 16:03:47.973339] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:18.393  Copying: 32/32 [B] (average 31 kBps) 00:11:18.393 00:11:18.393 ************************************ 00:11:18.393 END TEST dd_flag_append 00:11:18.393 ************************************ 00:11:18.393 16:03:48 -- dd/posix.sh@27 -- # [[ fz6e6ukfzq6ftnr56rmfbossxlxkiu9b0r9uu9cslvnd2mmhpce6f0cij6yqx9xv == \f\z\6\e\6\u\k\f\z\q\6\f\t\n\r\5\6\r\m\f\b\o\s\s\x\l\x\k\i\u\9\b\0\r\9\u\u\9\c\s\l\v\n\d\2\m\m\h\p\c\e\6\f\0\c\i\j\6\y\q\x\9\x\v ]] 00:11:18.393 00:11:18.393 real 0m0.507s 00:11:18.393 user 0m0.249s 00:11:18.393 sys 0m0.257s 00:11:18.393 16:03:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:18.393 16:03:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.393 16:03:48 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:11:18.393 16:03:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:18.393 16:03:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.393 16:03:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.651 ************************************ 00:11:18.651 START TEST dd_flag_directory 00:11:18.651 ************************************ 00:11:18.651 16:03:48 -- common/autotest_common.sh@1111 -- # directory 00:11:18.651 16:03:48 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:18.651 16:03:48 -- common/autotest_common.sh@638 -- # local es=0 00:11:18.651 16:03:48 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:18.651 16:03:48 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:18.651 16:03:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:18.651 16:03:48 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:18.651 16:03:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:18.651 16:03:48 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:18.651 16:03:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:18.651 16:03:48 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:18.651 16:03:48 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:18.651 16:03:48 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:18.651 [2024-04-15 16:03:48.425478] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:18.651 [2024-04-15 16:03:48.425614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75454 ] 00:11:18.651 [2024-04-15 16:03:48.581517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.909 [2024-04-15 16:03:48.637313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.909 [2024-04-15 16:03:48.637401] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:18.909 [2024-04-15 16:03:48.709670] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:18.909 [2024-04-15 16:03:48.709741] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:18.909 [2024-04-15 16:03:48.709764] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:18.909 [2024-04-15 16:03:48.809632] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:19.168 16:03:48 -- common/autotest_common.sh@641 -- # es=236 00:11:19.168 16:03:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:19.168 16:03:48 -- common/autotest_common.sh@650 -- # es=108 00:11:19.168 16:03:48 -- common/autotest_common.sh@651 -- # case "$es" in 00:11:19.168 16:03:48 -- common/autotest_common.sh@658 -- # es=1 00:11:19.168 16:03:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:19.168 16:03:48 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:19.168 16:03:48 -- common/autotest_common.sh@638 -- # local es=0 00:11:19.168 16:03:48 -- common/autotest_common.sh@640 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:19.168 16:03:48 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:19.168 16:03:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:19.168 16:03:48 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:19.168 16:03:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:19.168 16:03:48 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:19.168 16:03:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:19.168 16:03:48 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:19.168 16:03:48 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:19.168 16:03:48 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:19.168 [2024-04-15 16:03:48.962634] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:19.168 [2024-04-15 16:03:48.962789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75458 ] 00:11:19.168 [2024-04-15 16:03:49.110947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.428 [2024-04-15 16:03:49.186469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.428 [2024-04-15 16:03:49.186596] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:19.428 [2024-04-15 16:03:49.273024] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:19.428 [2024-04-15 16:03:49.273092] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:19.428 [2024-04-15 16:03:49.273115] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:19.428 [2024-04-15 16:03:49.374303] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:19.723 16:03:49 -- common/autotest_common.sh@641 -- # es=236 00:11:19.723 16:03:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:19.723 16:03:49 -- common/autotest_common.sh@650 -- # es=108 00:11:19.723 16:03:49 -- common/autotest_common.sh@651 -- # case "$es" in 00:11:19.723 16:03:49 -- common/autotest_common.sh@658 -- # es=1 00:11:19.723 16:03:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:19.723 00:11:19.723 real 0m1.100s 00:11:19.723 user 0m0.585s 00:11:19.723 sys 0m0.299s 00:11:19.723 16:03:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:19.723 ************************************ 00:11:19.723 END TEST dd_flag_directory 00:11:19.723 ************************************ 00:11:19.723 16:03:49 -- common/autotest_common.sh@10 -- # set +x 00:11:19.723 16:03:49 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:11:19.723 16:03:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:19.723 16:03:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:19.723 16:03:49 -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.723 ************************************ 00:11:19.723 START TEST dd_flag_nofollow 00:11:19.723 ************************************ 00:11:19.723 16:03:49 -- common/autotest_common.sh@1111 -- # nofollow 00:11:19.723 16:03:49 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:19.723 16:03:49 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:19.723 16:03:49 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:19.723 16:03:49 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:19.723 16:03:49 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:19.723 16:03:49 -- common/autotest_common.sh@638 -- # local es=0 00:11:19.723 16:03:49 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:19.723 16:03:49 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:19.723 16:03:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:19.723 16:03:49 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:19.723 16:03:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:19.723 16:03:49 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:19.723 16:03:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:19.723 16:03:49 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:19.723 16:03:49 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:19.723 16:03:49 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:19.723 [2024-04-15 16:03:49.650916] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:19.723 [2024-04-15 16:03:49.651049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75497 ] 00:11:20.001 [2024-04-15 16:03:49.792349] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.001 [2024-04-15 16:03:49.851867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.001 [2024-04-15 16:03:49.851945] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:20.001 [2024-04-15 16:03:49.925985] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:20.001 [2024-04-15 16:03:49.926046] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:20.001 [2024-04-15 16:03:49.926065] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:20.259 [2024-04-15 16:03:50.020492] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:20.259 16:03:50 -- common/autotest_common.sh@641 -- # es=216 00:11:20.259 16:03:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:20.259 16:03:50 -- common/autotest_common.sh@650 -- # es=88 00:11:20.259 16:03:50 -- common/autotest_common.sh@651 -- # case "$es" in 00:11:20.259 16:03:50 -- common/autotest_common.sh@658 -- # es=1 00:11:20.259 16:03:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:20.260 16:03:50 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:20.260 16:03:50 -- common/autotest_common.sh@638 -- # local es=0 00:11:20.260 16:03:50 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:20.260 16:03:50 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:20.260 16:03:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:20.260 16:03:50 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:20.260 16:03:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:20.260 16:03:50 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:20.260 16:03:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:20.260 16:03:50 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:20.260 16:03:50 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:20.260 16:03:50 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:20.260 [2024-04-15 16:03:50.152026] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:20.260 [2024-04-15 16:03:50.152127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75506 ] 00:11:20.520 [2024-04-15 16:03:50.293657] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.520 [2024-04-15 16:03:50.343179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.520 [2024-04-15 16:03:50.343255] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:20.520 [2024-04-15 16:03:50.409916] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:20.520 [2024-04-15 16:03:50.409972] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:20.520 [2024-04-15 16:03:50.409991] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:20.779 [2024-04-15 16:03:50.504116] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:20.779 16:03:50 -- common/autotest_common.sh@641 -- # es=216 00:11:20.779 16:03:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:20.779 16:03:50 -- common/autotest_common.sh@650 -- # es=88 00:11:20.779 16:03:50 -- common/autotest_common.sh@651 -- # case "$es" in 00:11:20.779 16:03:50 -- common/autotest_common.sh@658 -- # es=1 00:11:20.779 16:03:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:20.779 16:03:50 -- dd/posix.sh@46 -- # gen_bytes 512 00:11:20.779 16:03:50 -- dd/common.sh@98 -- # xtrace_disable 00:11:20.779 16:03:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.779 16:03:50 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:20.779 [2024-04-15 16:03:50.657145] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:20.779 [2024-04-15 16:03:50.657268] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75514 ] 00:11:21.038 [2024-04-15 16:03:50.802969] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.038 [2024-04-15 16:03:50.852106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.038 [2024-04-15 16:03:50.852178] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:21.296  Copying: 512/512 [B] (average 500 kBps) 00:11:21.296 00:11:21.296 16:03:51 -- dd/posix.sh@49 -- # [[ 9b9g3y8aiccsotwf95w6f16ehcsdr1wnthjhpkcfwe8d9q56q6occd5zmqf4sxogzuto80o96bwac5cq07twkyg97cex91r7w0po3vjynqxg86fh06gxg002r27n7uzx5fakllio3vx680cc5o0kdlgyqagw58cluzedjo8fd0cooubk8vimpmj1ex78zmsfwv6dyrdh7gtpngo93576zux6ev4npd9px6blympk47lkipe4wnt0vsjv9lzgt88qsnz5a8fbqogpldeb9nzby22saoru3va9r0n1io3n8l9f4kl6fshh1nf5qxqfetgp6a9lrog9lxsemdsfm3mwde3a5oqp7pd8rya0ltb1gk0t2mjiegt256kpmpr3bfdfaq3dqhwczr1n8fg6d0d6810coo7gsbf1c6dlnopmj8q53v9ln074jmsjvunwrutor88o9h8ls5n6d322pguja6jtibxjwdlk4wi1zmzetzeuwwk1dk7lar9fbiz4om0u == \9\b\9\g\3\y\8\a\i\c\c\s\o\t\w\f\9\5\w\6\f\1\6\e\h\c\s\d\r\1\w\n\t\h\j\h\p\k\c\f\w\e\8\d\9\q\5\6\q\6\o\c\c\d\5\z\m\q\f\4\s\x\o\g\z\u\t\o\8\0\o\9\6\b\w\a\c\5\c\q\0\7\t\w\k\y\g\9\7\c\e\x\9\1\r\7\w\0\p\o\3\v\j\y\n\q\x\g\8\6\f\h\0\6\g\x\g\0\0\2\r\2\7\n\7\u\z\x\5\f\a\k\l\l\i\o\3\v\x\6\8\0\c\c\5\o\0\k\d\l\g\y\q\a\g\w\5\8\c\l\u\z\e\d\j\o\8\f\d\0\c\o\o\u\b\k\8\v\i\m\p\m\j\1\e\x\7\8\z\m\s\f\w\v\6\d\y\r\d\h\7\g\t\p\n\g\o\9\3\5\7\6\z\u\x\6\e\v\4\n\p\d\9\p\x\6\b\l\y\m\p\k\4\7\l\k\i\p\e\4\w\n\t\0\v\s\j\v\9\l\z\g\t\8\8\q\s\n\z\5\a\8\f\b\q\o\g\p\l\d\e\b\9\n\z\b\y\2\2\s\a\o\r\u\3\v\a\9\r\0\n\1\i\o\3\n\8\l\9\f\4\k\l\6\f\s\h\h\1\n\f\5\q\x\q\f\e\t\g\p\6\a\9\l\r\o\g\9\l\x\s\e\m\d\s\f\m\3\m\w\d\e\3\a\5\o\q\p\7\p\d\8\r\y\a\0\l\t\b\1\g\k\0\t\2\m\j\i\e\g\t\2\5\6\k\p\m\p\r\3\b\f\d\f\a\q\3\d\q\h\w\c\z\r\1\n\8\f\g\6\d\0\d\6\8\1\0\c\o\o\7\g\s\b\f\1\c\6\d\l\n\o\p\m\j\8\q\5\3\v\9\l\n\0\7\4\j\m\s\j\v\u\n\w\r\u\t\o\r\8\8\o\9\h\8\l\s\5\n\6\d\3\2\2\p\g\u\j\a\6\j\t\i\b\x\j\w\d\l\k\4\w\i\1\z\m\z\e\t\z\e\u\w\w\k\1\d\k\7\l\a\r\9\f\b\i\z\4\o\m\0\u ]] 00:11:21.296 00:11:21.296 real 0m1.503s 00:11:21.296 user 0m0.780s 00:11:21.296 sys 0m0.506s 00:11:21.296 16:03:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:21.296 ************************************ 00:11:21.296 END TEST dd_flag_nofollow 00:11:21.296 ************************************ 00:11:21.296 16:03:51 -- common/autotest_common.sh@10 -- # set +x 00:11:21.296 16:03:51 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:11:21.296 16:03:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:21.296 16:03:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:21.296 16:03:51 -- common/autotest_common.sh@10 -- # set +x 00:11:21.296 ************************************ 00:11:21.296 START TEST dd_flag_noatime 00:11:21.296 ************************************ 00:11:21.296 16:03:51 -- common/autotest_common.sh@1111 -- # noatime 00:11:21.296 16:03:51 -- dd/posix.sh@53 -- # local atime_if 00:11:21.296 16:03:51 -- dd/posix.sh@54 -- # local atime_of 00:11:21.296 16:03:51 -- dd/posix.sh@58 -- # gen_bytes 512 00:11:21.296 16:03:51 -- dd/common.sh@98 -- # xtrace_disable 00:11:21.296 16:03:51 -- common/autotest_common.sh@10 -- # set +x 00:11:21.296 16:03:51 -- dd/posix.sh@60 -- # stat --printf=%X 
/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:21.296 16:03:51 -- dd/posix.sh@60 -- # atime_if=1713197030 00:11:21.296 16:03:51 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:21.296 16:03:51 -- dd/posix.sh@61 -- # atime_of=1713197031 00:11:21.296 16:03:51 -- dd/posix.sh@66 -- # sleep 1 00:11:22.675 16:03:52 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:22.675 [2024-04-15 16:03:52.280845] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:22.675 [2024-04-15 16:03:52.281396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75560 ] 00:11:22.675 [2024-04-15 16:03:52.426488] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.675 [2024-04-15 16:03:52.481191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.675 [2024-04-15 16:03:52.481284] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:22.951  Copying: 512/512 [B] (average 500 kBps) 00:11:22.951 00:11:22.951 16:03:52 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:22.951 16:03:52 -- dd/posix.sh@69 -- # (( atime_if == 1713197030 )) 00:11:22.951 16:03:52 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:22.951 16:03:52 -- dd/posix.sh@70 -- # (( atime_of == 1713197031 )) 00:11:22.951 16:03:52 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:22.951 [2024-04-15 16:03:52.800759] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:22.951 [2024-04-15 16:03:52.800865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75574 ] 00:11:23.213 [2024-04-15 16:03:52.940897] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.213 [2024-04-15 16:03:52.991432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.213 [2024-04-15 16:03:52.991543] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:23.472  Copying: 512/512 [B] (average 500 kBps) 00:11:23.472 00:11:23.472 16:03:53 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:23.472 16:03:53 -- dd/posix.sh@73 -- # (( atime_if < 1713197033 )) 00:11:23.472 00:11:23.472 real 0m2.037s 00:11:23.472 user 0m0.513s 00:11:23.472 sys 0m0.526s 00:11:23.472 16:03:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:23.472 16:03:53 -- common/autotest_common.sh@10 -- # set +x 00:11:23.472 ************************************ 00:11:23.472 END TEST dd_flag_noatime 00:11:23.472 ************************************ 00:11:23.472 16:03:53 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:11:23.472 16:03:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:23.472 16:03:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.472 16:03:53 -- common/autotest_common.sh@10 -- # set +x 00:11:23.472 ************************************ 00:11:23.472 START TEST dd_flags_misc 00:11:23.472 ************************************ 00:11:23.472 16:03:53 -- common/autotest_common.sh@1111 -- # io 00:11:23.472 16:03:53 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:11:23.472 16:03:53 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:11:23.472 16:03:53 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:11:23.472 16:03:53 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:23.472 16:03:53 -- dd/posix.sh@86 -- # gen_bytes 512 00:11:23.472 16:03:53 -- dd/common.sh@98 -- # xtrace_disable 00:11:23.472 16:03:53 -- common/autotest_common.sh@10 -- # set +x 00:11:23.472 16:03:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:23.472 16:03:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:23.472 [2024-04-15 16:03:53.410989] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:23.472 [2024-04-15 16:03:53.411080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75606 ] 00:11:23.730 [2024-04-15 16:03:53.550521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.730 [2024-04-15 16:03:53.600975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.730 [2024-04-15 16:03:53.601048] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:23.988  Copying: 512/512 [B] (average 500 kBps) 00:11:23.988 00:11:23.988 16:03:53 -- dd/posix.sh@93 -- # [[ rb4cuqt5p6zg90n4mitjdt5uz37ormpp6jq2e0rn62n9994hll41l4j9z1tie6of4y53ajrbxr4ijomqziuapuolqxcew3pk0707z3qkf21wnazbnlec4gnqqzp88qe6r3j3c6ppl1fnrelr3yqqoe0nm9c6nug12qgihys6ldw53epw53b8g604640u4o7bhlgvs837sc8jgicay9yzn07vb84bd7ghaiilr0ezogumtts8etdcj45h0yt96sp9ioltygwpvytz3d00b1zvjwsi4hyjywgapmm34dj7h4iw7ss83x8d1lonapm26ir8fuzehyfxzqklxtr7wiadtrzbwvmtcc8xu6amcc137x55mcpgiuf8z5kdjs8be7fxogashvmwfodgftv49hgrlpsa329aktn2k6z6jmeqiglcjp1kww1jad8set9fqh6m6msskfd4kg05fsqwz9z4ass2axado8j2zg0q9iigzhlnckrpc6z4tbhmrbmkphsk == \r\b\4\c\u\q\t\5\p\6\z\g\9\0\n\4\m\i\t\j\d\t\5\u\z\3\7\o\r\m\p\p\6\j\q\2\e\0\r\n\6\2\n\9\9\9\4\h\l\l\4\1\l\4\j\9\z\1\t\i\e\6\o\f\4\y\5\3\a\j\r\b\x\r\4\i\j\o\m\q\z\i\u\a\p\u\o\l\q\x\c\e\w\3\p\k\0\7\0\7\z\3\q\k\f\2\1\w\n\a\z\b\n\l\e\c\4\g\n\q\q\z\p\8\8\q\e\6\r\3\j\3\c\6\p\p\l\1\f\n\r\e\l\r\3\y\q\q\o\e\0\n\m\9\c\6\n\u\g\1\2\q\g\i\h\y\s\6\l\d\w\5\3\e\p\w\5\3\b\8\g\6\0\4\6\4\0\u\4\o\7\b\h\l\g\v\s\8\3\7\s\c\8\j\g\i\c\a\y\9\y\z\n\0\7\v\b\8\4\b\d\7\g\h\a\i\i\l\r\0\e\z\o\g\u\m\t\t\s\8\e\t\d\c\j\4\5\h\0\y\t\9\6\s\p\9\i\o\l\t\y\g\w\p\v\y\t\z\3\d\0\0\b\1\z\v\j\w\s\i\4\h\y\j\y\w\g\a\p\m\m\3\4\d\j\7\h\4\i\w\7\s\s\8\3\x\8\d\1\l\o\n\a\p\m\2\6\i\r\8\f\u\z\e\h\y\f\x\z\q\k\l\x\t\r\7\w\i\a\d\t\r\z\b\w\v\m\t\c\c\8\x\u\6\a\m\c\c\1\3\7\x\5\5\m\c\p\g\i\u\f\8\z\5\k\d\j\s\8\b\e\7\f\x\o\g\a\s\h\v\m\w\f\o\d\g\f\t\v\4\9\h\g\r\l\p\s\a\3\2\9\a\k\t\n\2\k\6\z\6\j\m\e\q\i\g\l\c\j\p\1\k\w\w\1\j\a\d\8\s\e\t\9\f\q\h\6\m\6\m\s\s\k\f\d\4\k\g\0\5\f\s\q\w\z\9\z\4\a\s\s\2\a\x\a\d\o\8\j\2\z\g\0\q\9\i\i\g\z\h\l\n\c\k\r\p\c\6\z\4\t\b\h\m\r\b\m\k\p\h\s\k ]] 00:11:23.988 16:03:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:23.988 16:03:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:23.988 [2024-04-15 16:03:53.885435] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:23.988 [2024-04-15 16:03:53.885541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75616 ] 00:11:24.247 [2024-04-15 16:03:54.020605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.247 [2024-04-15 16:03:54.071000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.247 [2024-04-15 16:03:54.071077] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:24.504  Copying: 512/512 [B] (average 500 kBps) 00:11:24.504 00:11:24.504 16:03:54 -- dd/posix.sh@93 -- # [[ rb4cuqt5p6zg90n4mitjdt5uz37ormpp6jq2e0rn62n9994hll41l4j9z1tie6of4y53ajrbxr4ijomqziuapuolqxcew3pk0707z3qkf21wnazbnlec4gnqqzp88qe6r3j3c6ppl1fnrelr3yqqoe0nm9c6nug12qgihys6ldw53epw53b8g604640u4o7bhlgvs837sc8jgicay9yzn07vb84bd7ghaiilr0ezogumtts8etdcj45h0yt96sp9ioltygwpvytz3d00b1zvjwsi4hyjywgapmm34dj7h4iw7ss83x8d1lonapm26ir8fuzehyfxzqklxtr7wiadtrzbwvmtcc8xu6amcc137x55mcpgiuf8z5kdjs8be7fxogashvmwfodgftv49hgrlpsa329aktn2k6z6jmeqiglcjp1kww1jad8set9fqh6m6msskfd4kg05fsqwz9z4ass2axado8j2zg0q9iigzhlnckrpc6z4tbhmrbmkphsk == \r\b\4\c\u\q\t\5\p\6\z\g\9\0\n\4\m\i\t\j\d\t\5\u\z\3\7\o\r\m\p\p\6\j\q\2\e\0\r\n\6\2\n\9\9\9\4\h\l\l\4\1\l\4\j\9\z\1\t\i\e\6\o\f\4\y\5\3\a\j\r\b\x\r\4\i\j\o\m\q\z\i\u\a\p\u\o\l\q\x\c\e\w\3\p\k\0\7\0\7\z\3\q\k\f\2\1\w\n\a\z\b\n\l\e\c\4\g\n\q\q\z\p\8\8\q\e\6\r\3\j\3\c\6\p\p\l\1\f\n\r\e\l\r\3\y\q\q\o\e\0\n\m\9\c\6\n\u\g\1\2\q\g\i\h\y\s\6\l\d\w\5\3\e\p\w\5\3\b\8\g\6\0\4\6\4\0\u\4\o\7\b\h\l\g\v\s\8\3\7\s\c\8\j\g\i\c\a\y\9\y\z\n\0\7\v\b\8\4\b\d\7\g\h\a\i\i\l\r\0\e\z\o\g\u\m\t\t\s\8\e\t\d\c\j\4\5\h\0\y\t\9\6\s\p\9\i\o\l\t\y\g\w\p\v\y\t\z\3\d\0\0\b\1\z\v\j\w\s\i\4\h\y\j\y\w\g\a\p\m\m\3\4\d\j\7\h\4\i\w\7\s\s\8\3\x\8\d\1\l\o\n\a\p\m\2\6\i\r\8\f\u\z\e\h\y\f\x\z\q\k\l\x\t\r\7\w\i\a\d\t\r\z\b\w\v\m\t\c\c\8\x\u\6\a\m\c\c\1\3\7\x\5\5\m\c\p\g\i\u\f\8\z\5\k\d\j\s\8\b\e\7\f\x\o\g\a\s\h\v\m\w\f\o\d\g\f\t\v\4\9\h\g\r\l\p\s\a\3\2\9\a\k\t\n\2\k\6\z\6\j\m\e\q\i\g\l\c\j\p\1\k\w\w\1\j\a\d\8\s\e\t\9\f\q\h\6\m\6\m\s\s\k\f\d\4\k\g\0\5\f\s\q\w\z\9\z\4\a\s\s\2\a\x\a\d\o\8\j\2\z\g\0\q\9\i\i\g\z\h\l\n\c\k\r\p\c\6\z\4\t\b\h\m\r\b\m\k\p\h\s\k ]] 00:11:24.504 16:03:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:24.504 16:03:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:24.504 [2024-04-15 16:03:54.376182] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:24.504 [2024-04-15 16:03:54.376797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75624 ] 00:11:24.762 [2024-04-15 16:03:54.522299] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.762 [2024-04-15 16:03:54.572290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.762 [2024-04-15 16:03:54.572373] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:25.021  Copying: 512/512 [B] (average 125 kBps) 00:11:25.021 00:11:25.021 16:03:54 -- dd/posix.sh@93 -- # [[ rb4cuqt5p6zg90n4mitjdt5uz37ormpp6jq2e0rn62n9994hll41l4j9z1tie6of4y53ajrbxr4ijomqziuapuolqxcew3pk0707z3qkf21wnazbnlec4gnqqzp88qe6r3j3c6ppl1fnrelr3yqqoe0nm9c6nug12qgihys6ldw53epw53b8g604640u4o7bhlgvs837sc8jgicay9yzn07vb84bd7ghaiilr0ezogumtts8etdcj45h0yt96sp9ioltygwpvytz3d00b1zvjwsi4hyjywgapmm34dj7h4iw7ss83x8d1lonapm26ir8fuzehyfxzqklxtr7wiadtrzbwvmtcc8xu6amcc137x55mcpgiuf8z5kdjs8be7fxogashvmwfodgftv49hgrlpsa329aktn2k6z6jmeqiglcjp1kww1jad8set9fqh6m6msskfd4kg05fsqwz9z4ass2axado8j2zg0q9iigzhlnckrpc6z4tbhmrbmkphsk == \r\b\4\c\u\q\t\5\p\6\z\g\9\0\n\4\m\i\t\j\d\t\5\u\z\3\7\o\r\m\p\p\6\j\q\2\e\0\r\n\6\2\n\9\9\9\4\h\l\l\4\1\l\4\j\9\z\1\t\i\e\6\o\f\4\y\5\3\a\j\r\b\x\r\4\i\j\o\m\q\z\i\u\a\p\u\o\l\q\x\c\e\w\3\p\k\0\7\0\7\z\3\q\k\f\2\1\w\n\a\z\b\n\l\e\c\4\g\n\q\q\z\p\8\8\q\e\6\r\3\j\3\c\6\p\p\l\1\f\n\r\e\l\r\3\y\q\q\o\e\0\n\m\9\c\6\n\u\g\1\2\q\g\i\h\y\s\6\l\d\w\5\3\e\p\w\5\3\b\8\g\6\0\4\6\4\0\u\4\o\7\b\h\l\g\v\s\8\3\7\s\c\8\j\g\i\c\a\y\9\y\z\n\0\7\v\b\8\4\b\d\7\g\h\a\i\i\l\r\0\e\z\o\g\u\m\t\t\s\8\e\t\d\c\j\4\5\h\0\y\t\9\6\s\p\9\i\o\l\t\y\g\w\p\v\y\t\z\3\d\0\0\b\1\z\v\j\w\s\i\4\h\y\j\y\w\g\a\p\m\m\3\4\d\j\7\h\4\i\w\7\s\s\8\3\x\8\d\1\l\o\n\a\p\m\2\6\i\r\8\f\u\z\e\h\y\f\x\z\q\k\l\x\t\r\7\w\i\a\d\t\r\z\b\w\v\m\t\c\c\8\x\u\6\a\m\c\c\1\3\7\x\5\5\m\c\p\g\i\u\f\8\z\5\k\d\j\s\8\b\e\7\f\x\o\g\a\s\h\v\m\w\f\o\d\g\f\t\v\4\9\h\g\r\l\p\s\a\3\2\9\a\k\t\n\2\k\6\z\6\j\m\e\q\i\g\l\c\j\p\1\k\w\w\1\j\a\d\8\s\e\t\9\f\q\h\6\m\6\m\s\s\k\f\d\4\k\g\0\5\f\s\q\w\z\9\z\4\a\s\s\2\a\x\a\d\o\8\j\2\z\g\0\q\9\i\i\g\z\h\l\n\c\k\r\p\c\6\z\4\t\b\h\m\r\b\m\k\p\h\s\k ]] 00:11:25.021 16:03:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:25.021 16:03:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:25.021 [2024-04-15 16:03:54.866397] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:25.021 [2024-04-15 16:03:54.866515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75635 ] 00:11:25.280 [2024-04-15 16:03:55.010302] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.280 [2024-04-15 16:03:55.059974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.280 [2024-04-15 16:03:55.060047] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:25.538  Copying: 512/512 [B] (average 250 kBps) 00:11:25.538 00:11:25.538 16:03:55 -- dd/posix.sh@93 -- # [[ rb4cuqt5p6zg90n4mitjdt5uz37ormpp6jq2e0rn62n9994hll41l4j9z1tie6of4y53ajrbxr4ijomqziuapuolqxcew3pk0707z3qkf21wnazbnlec4gnqqzp88qe6r3j3c6ppl1fnrelr3yqqoe0nm9c6nug12qgihys6ldw53epw53b8g604640u4o7bhlgvs837sc8jgicay9yzn07vb84bd7ghaiilr0ezogumtts8etdcj45h0yt96sp9ioltygwpvytz3d00b1zvjwsi4hyjywgapmm34dj7h4iw7ss83x8d1lonapm26ir8fuzehyfxzqklxtr7wiadtrzbwvmtcc8xu6amcc137x55mcpgiuf8z5kdjs8be7fxogashvmwfodgftv49hgrlpsa329aktn2k6z6jmeqiglcjp1kww1jad8set9fqh6m6msskfd4kg05fsqwz9z4ass2axado8j2zg0q9iigzhlnckrpc6z4tbhmrbmkphsk == \r\b\4\c\u\q\t\5\p\6\z\g\9\0\n\4\m\i\t\j\d\t\5\u\z\3\7\o\r\m\p\p\6\j\q\2\e\0\r\n\6\2\n\9\9\9\4\h\l\l\4\1\l\4\j\9\z\1\t\i\e\6\o\f\4\y\5\3\a\j\r\b\x\r\4\i\j\o\m\q\z\i\u\a\p\u\o\l\q\x\c\e\w\3\p\k\0\7\0\7\z\3\q\k\f\2\1\w\n\a\z\b\n\l\e\c\4\g\n\q\q\z\p\8\8\q\e\6\r\3\j\3\c\6\p\p\l\1\f\n\r\e\l\r\3\y\q\q\o\e\0\n\m\9\c\6\n\u\g\1\2\q\g\i\h\y\s\6\l\d\w\5\3\e\p\w\5\3\b\8\g\6\0\4\6\4\0\u\4\o\7\b\h\l\g\v\s\8\3\7\s\c\8\j\g\i\c\a\y\9\y\z\n\0\7\v\b\8\4\b\d\7\g\h\a\i\i\l\r\0\e\z\o\g\u\m\t\t\s\8\e\t\d\c\j\4\5\h\0\y\t\9\6\s\p\9\i\o\l\t\y\g\w\p\v\y\t\z\3\d\0\0\b\1\z\v\j\w\s\i\4\h\y\j\y\w\g\a\p\m\m\3\4\d\j\7\h\4\i\w\7\s\s\8\3\x\8\d\1\l\o\n\a\p\m\2\6\i\r\8\f\u\z\e\h\y\f\x\z\q\k\l\x\t\r\7\w\i\a\d\t\r\z\b\w\v\m\t\c\c\8\x\u\6\a\m\c\c\1\3\7\x\5\5\m\c\p\g\i\u\f\8\z\5\k\d\j\s\8\b\e\7\f\x\o\g\a\s\h\v\m\w\f\o\d\g\f\t\v\4\9\h\g\r\l\p\s\a\3\2\9\a\k\t\n\2\k\6\z\6\j\m\e\q\i\g\l\c\j\p\1\k\w\w\1\j\a\d\8\s\e\t\9\f\q\h\6\m\6\m\s\s\k\f\d\4\k\g\0\5\f\s\q\w\z\9\z\4\a\s\s\2\a\x\a\d\o\8\j\2\z\g\0\q\9\i\i\g\z\h\l\n\c\k\r\p\c\6\z\4\t\b\h\m\r\b\m\k\p\h\s\k ]] 00:11:25.538 16:03:55 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:25.538 16:03:55 -- dd/posix.sh@86 -- # gen_bytes 512 00:11:25.538 16:03:55 -- dd/common.sh@98 -- # xtrace_disable 00:11:25.538 16:03:55 -- common/autotest_common.sh@10 -- # set +x 00:11:25.538 16:03:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:25.538 16:03:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:25.538 [2024-04-15 16:03:55.368651] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:25.538 [2024-04-15 16:03:55.368769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75639 ] 00:11:25.848 [2024-04-15 16:03:55.510763] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.848 [2024-04-15 16:03:55.560434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.848 [2024-04-15 16:03:55.560678] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:26.125  Copying: 512/512 [B] (average 500 kBps) 00:11:26.125 00:11:26.125 16:03:55 -- dd/posix.sh@93 -- # [[ gryzp1a7qo1n9mtopamqwbflwi0oxci522f42mw540j0uznnt2bwh1k39by62ny1b74xmdlsxk9d2c7ef1lrvyf50x99kr98v0qlordja0o6d1ges5cdx64gi8dvsh94km7jpx6kx3gg4ll7zevizdqtu4fgq5seiwvzsxcyo8lj8mmrm4pxpbs9do5mmwiaqwhi9qkq3nt0djrutct8ieoiukd1j0gx779vdehyonlbd5txz17bzi8x49kmcqkcbhu4ssm1lxaibxcm7q530qfexbk4ly5tq2c2saa3h9r2l4bqoyitkpv1hjxf458gepv9y5ilhq8antwkavqmab8eqkczewfriqqd7oem9whul1z0lwjas4cc2f9zmcn67ul7wcqbg0ahnypgh7jutr1e5sbw1zehznk0xrkll49yjjofrge1lnxzntxhpto4wyxu9to9zolazqvmwbxlqm0yo0rehhaxx0flkov91ikbgb00havq6d6awcze7of2 == \g\r\y\z\p\1\a\7\q\o\1\n\9\m\t\o\p\a\m\q\w\b\f\l\w\i\0\o\x\c\i\5\2\2\f\4\2\m\w\5\4\0\j\0\u\z\n\n\t\2\b\w\h\1\k\3\9\b\y\6\2\n\y\1\b\7\4\x\m\d\l\s\x\k\9\d\2\c\7\e\f\1\l\r\v\y\f\5\0\x\9\9\k\r\9\8\v\0\q\l\o\r\d\j\a\0\o\6\d\1\g\e\s\5\c\d\x\6\4\g\i\8\d\v\s\h\9\4\k\m\7\j\p\x\6\k\x\3\g\g\4\l\l\7\z\e\v\i\z\d\q\t\u\4\f\g\q\5\s\e\i\w\v\z\s\x\c\y\o\8\l\j\8\m\m\r\m\4\p\x\p\b\s\9\d\o\5\m\m\w\i\a\q\w\h\i\9\q\k\q\3\n\t\0\d\j\r\u\t\c\t\8\i\e\o\i\u\k\d\1\j\0\g\x\7\7\9\v\d\e\h\y\o\n\l\b\d\5\t\x\z\1\7\b\z\i\8\x\4\9\k\m\c\q\k\c\b\h\u\4\s\s\m\1\l\x\a\i\b\x\c\m\7\q\5\3\0\q\f\e\x\b\k\4\l\y\5\t\q\2\c\2\s\a\a\3\h\9\r\2\l\4\b\q\o\y\i\t\k\p\v\1\h\j\x\f\4\5\8\g\e\p\v\9\y\5\i\l\h\q\8\a\n\t\w\k\a\v\q\m\a\b\8\e\q\k\c\z\e\w\f\r\i\q\q\d\7\o\e\m\9\w\h\u\l\1\z\0\l\w\j\a\s\4\c\c\2\f\9\z\m\c\n\6\7\u\l\7\w\c\q\b\g\0\a\h\n\y\p\g\h\7\j\u\t\r\1\e\5\s\b\w\1\z\e\h\z\n\k\0\x\r\k\l\l\4\9\y\j\j\o\f\r\g\e\1\l\n\x\z\n\t\x\h\p\t\o\4\w\y\x\u\9\t\o\9\z\o\l\a\z\q\v\m\w\b\x\l\q\m\0\y\o\0\r\e\h\h\a\x\x\0\f\l\k\o\v\9\1\i\k\b\g\b\0\0\h\a\v\q\6\d\6\a\w\c\z\e\7\o\f\2 ]] 00:11:26.125 16:03:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:26.125 16:03:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:26.125 [2024-04-15 16:03:55.846223] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:26.125 [2024-04-15 16:03:55.846332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75654 ] 00:11:26.125 [2024-04-15 16:03:55.985884] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.125 [2024-04-15 16:03:56.039620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.125 [2024-04-15 16:03:56.039695] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:26.383  Copying: 512/512 [B] (average 500 kBps) 00:11:26.383 00:11:26.383 16:03:56 -- dd/posix.sh@93 -- # [[ gryzp1a7qo1n9mtopamqwbflwi0oxci522f42mw540j0uznnt2bwh1k39by62ny1b74xmdlsxk9d2c7ef1lrvyf50x99kr98v0qlordja0o6d1ges5cdx64gi8dvsh94km7jpx6kx3gg4ll7zevizdqtu4fgq5seiwvzsxcyo8lj8mmrm4pxpbs9do5mmwiaqwhi9qkq3nt0djrutct8ieoiukd1j0gx779vdehyonlbd5txz17bzi8x49kmcqkcbhu4ssm1lxaibxcm7q530qfexbk4ly5tq2c2saa3h9r2l4bqoyitkpv1hjxf458gepv9y5ilhq8antwkavqmab8eqkczewfriqqd7oem9whul1z0lwjas4cc2f9zmcn67ul7wcqbg0ahnypgh7jutr1e5sbw1zehznk0xrkll49yjjofrge1lnxzntxhpto4wyxu9to9zolazqvmwbxlqm0yo0rehhaxx0flkov91ikbgb00havq6d6awcze7of2 == \g\r\y\z\p\1\a\7\q\o\1\n\9\m\t\o\p\a\m\q\w\b\f\l\w\i\0\o\x\c\i\5\2\2\f\4\2\m\w\5\4\0\j\0\u\z\n\n\t\2\b\w\h\1\k\3\9\b\y\6\2\n\y\1\b\7\4\x\m\d\l\s\x\k\9\d\2\c\7\e\f\1\l\r\v\y\f\5\0\x\9\9\k\r\9\8\v\0\q\l\o\r\d\j\a\0\o\6\d\1\g\e\s\5\c\d\x\6\4\g\i\8\d\v\s\h\9\4\k\m\7\j\p\x\6\k\x\3\g\g\4\l\l\7\z\e\v\i\z\d\q\t\u\4\f\g\q\5\s\e\i\w\v\z\s\x\c\y\o\8\l\j\8\m\m\r\m\4\p\x\p\b\s\9\d\o\5\m\m\w\i\a\q\w\h\i\9\q\k\q\3\n\t\0\d\j\r\u\t\c\t\8\i\e\o\i\u\k\d\1\j\0\g\x\7\7\9\v\d\e\h\y\o\n\l\b\d\5\t\x\z\1\7\b\z\i\8\x\4\9\k\m\c\q\k\c\b\h\u\4\s\s\m\1\l\x\a\i\b\x\c\m\7\q\5\3\0\q\f\e\x\b\k\4\l\y\5\t\q\2\c\2\s\a\a\3\h\9\r\2\l\4\b\q\o\y\i\t\k\p\v\1\h\j\x\f\4\5\8\g\e\p\v\9\y\5\i\l\h\q\8\a\n\t\w\k\a\v\q\m\a\b\8\e\q\k\c\z\e\w\f\r\i\q\q\d\7\o\e\m\9\w\h\u\l\1\z\0\l\w\j\a\s\4\c\c\2\f\9\z\m\c\n\6\7\u\l\7\w\c\q\b\g\0\a\h\n\y\p\g\h\7\j\u\t\r\1\e\5\s\b\w\1\z\e\h\z\n\k\0\x\r\k\l\l\4\9\y\j\j\o\f\r\g\e\1\l\n\x\z\n\t\x\h\p\t\o\4\w\y\x\u\9\t\o\9\z\o\l\a\z\q\v\m\w\b\x\l\q\m\0\y\o\0\r\e\h\h\a\x\x\0\f\l\k\o\v\9\1\i\k\b\g\b\0\0\h\a\v\q\6\d\6\a\w\c\z\e\7\o\f\2 ]] 00:11:26.383 16:03:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:26.383 16:03:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:26.383 [2024-04-15 16:03:56.328239] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:26.383 [2024-04-15 16:03:56.328339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75658 ] 00:11:26.642 [2024-04-15 16:03:56.467186] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.642 [2024-04-15 16:03:56.517605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.642 [2024-04-15 16:03:56.517818] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:26.901  Copying: 512/512 [B] (average 500 kBps) 00:11:26.901 00:11:26.901 16:03:56 -- dd/posix.sh@93 -- # [[ gryzp1a7qo1n9mtopamqwbflwi0oxci522f42mw540j0uznnt2bwh1k39by62ny1b74xmdlsxk9d2c7ef1lrvyf50x99kr98v0qlordja0o6d1ges5cdx64gi8dvsh94km7jpx6kx3gg4ll7zevizdqtu4fgq5seiwvzsxcyo8lj8mmrm4pxpbs9do5mmwiaqwhi9qkq3nt0djrutct8ieoiukd1j0gx779vdehyonlbd5txz17bzi8x49kmcqkcbhu4ssm1lxaibxcm7q530qfexbk4ly5tq2c2saa3h9r2l4bqoyitkpv1hjxf458gepv9y5ilhq8antwkavqmab8eqkczewfriqqd7oem9whul1z0lwjas4cc2f9zmcn67ul7wcqbg0ahnypgh7jutr1e5sbw1zehznk0xrkll49yjjofrge1lnxzntxhpto4wyxu9to9zolazqvmwbxlqm0yo0rehhaxx0flkov91ikbgb00havq6d6awcze7of2 == \g\r\y\z\p\1\a\7\q\o\1\n\9\m\t\o\p\a\m\q\w\b\f\l\w\i\0\o\x\c\i\5\2\2\f\4\2\m\w\5\4\0\j\0\u\z\n\n\t\2\b\w\h\1\k\3\9\b\y\6\2\n\y\1\b\7\4\x\m\d\l\s\x\k\9\d\2\c\7\e\f\1\l\r\v\y\f\5\0\x\9\9\k\r\9\8\v\0\q\l\o\r\d\j\a\0\o\6\d\1\g\e\s\5\c\d\x\6\4\g\i\8\d\v\s\h\9\4\k\m\7\j\p\x\6\k\x\3\g\g\4\l\l\7\z\e\v\i\z\d\q\t\u\4\f\g\q\5\s\e\i\w\v\z\s\x\c\y\o\8\l\j\8\m\m\r\m\4\p\x\p\b\s\9\d\o\5\m\m\w\i\a\q\w\h\i\9\q\k\q\3\n\t\0\d\j\r\u\t\c\t\8\i\e\o\i\u\k\d\1\j\0\g\x\7\7\9\v\d\e\h\y\o\n\l\b\d\5\t\x\z\1\7\b\z\i\8\x\4\9\k\m\c\q\k\c\b\h\u\4\s\s\m\1\l\x\a\i\b\x\c\m\7\q\5\3\0\q\f\e\x\b\k\4\l\y\5\t\q\2\c\2\s\a\a\3\h\9\r\2\l\4\b\q\o\y\i\t\k\p\v\1\h\j\x\f\4\5\8\g\e\p\v\9\y\5\i\l\h\q\8\a\n\t\w\k\a\v\q\m\a\b\8\e\q\k\c\z\e\w\f\r\i\q\q\d\7\o\e\m\9\w\h\u\l\1\z\0\l\w\j\a\s\4\c\c\2\f\9\z\m\c\n\6\7\u\l\7\w\c\q\b\g\0\a\h\n\y\p\g\h\7\j\u\t\r\1\e\5\s\b\w\1\z\e\h\z\n\k\0\x\r\k\l\l\4\9\y\j\j\o\f\r\g\e\1\l\n\x\z\n\t\x\h\p\t\o\4\w\y\x\u\9\t\o\9\z\o\l\a\z\q\v\m\w\b\x\l\q\m\0\y\o\0\r\e\h\h\a\x\x\0\f\l\k\o\v\9\1\i\k\b\g\b\0\0\h\a\v\q\6\d\6\a\w\c\z\e\7\o\f\2 ]] 00:11:26.901 16:03:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:26.901 16:03:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:26.901 [2024-04-15 16:03:56.818518] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:26.901 [2024-04-15 16:03:56.818634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75673 ] 00:11:27.159 [2024-04-15 16:03:56.965905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.159 [2024-04-15 16:03:57.019635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.159 [2024-04-15 16:03:57.019730] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:27.418  Copying: 512/512 [B] (average 125 kBps) 00:11:27.418 00:11:27.418 16:03:57 -- dd/posix.sh@93 -- # [[ gryzp1a7qo1n9mtopamqwbflwi0oxci522f42mw540j0uznnt2bwh1k39by62ny1b74xmdlsxk9d2c7ef1lrvyf50x99kr98v0qlordja0o6d1ges5cdx64gi8dvsh94km7jpx6kx3gg4ll7zevizdqtu4fgq5seiwvzsxcyo8lj8mmrm4pxpbs9do5mmwiaqwhi9qkq3nt0djrutct8ieoiukd1j0gx779vdehyonlbd5txz17bzi8x49kmcqkcbhu4ssm1lxaibxcm7q530qfexbk4ly5tq2c2saa3h9r2l4bqoyitkpv1hjxf458gepv9y5ilhq8antwkavqmab8eqkczewfriqqd7oem9whul1z0lwjas4cc2f9zmcn67ul7wcqbg0ahnypgh7jutr1e5sbw1zehznk0xrkll49yjjofrge1lnxzntxhpto4wyxu9to9zolazqvmwbxlqm0yo0rehhaxx0flkov91ikbgb00havq6d6awcze7of2 == \g\r\y\z\p\1\a\7\q\o\1\n\9\m\t\o\p\a\m\q\w\b\f\l\w\i\0\o\x\c\i\5\2\2\f\4\2\m\w\5\4\0\j\0\u\z\n\n\t\2\b\w\h\1\k\3\9\b\y\6\2\n\y\1\b\7\4\x\m\d\l\s\x\k\9\d\2\c\7\e\f\1\l\r\v\y\f\5\0\x\9\9\k\r\9\8\v\0\q\l\o\r\d\j\a\0\o\6\d\1\g\e\s\5\c\d\x\6\4\g\i\8\d\v\s\h\9\4\k\m\7\j\p\x\6\k\x\3\g\g\4\l\l\7\z\e\v\i\z\d\q\t\u\4\f\g\q\5\s\e\i\w\v\z\s\x\c\y\o\8\l\j\8\m\m\r\m\4\p\x\p\b\s\9\d\o\5\m\m\w\i\a\q\w\h\i\9\q\k\q\3\n\t\0\d\j\r\u\t\c\t\8\i\e\o\i\u\k\d\1\j\0\g\x\7\7\9\v\d\e\h\y\o\n\l\b\d\5\t\x\z\1\7\b\z\i\8\x\4\9\k\m\c\q\k\c\b\h\u\4\s\s\m\1\l\x\a\i\b\x\c\m\7\q\5\3\0\q\f\e\x\b\k\4\l\y\5\t\q\2\c\2\s\a\a\3\h\9\r\2\l\4\b\q\o\y\i\t\k\p\v\1\h\j\x\f\4\5\8\g\e\p\v\9\y\5\i\l\h\q\8\a\n\t\w\k\a\v\q\m\a\b\8\e\q\k\c\z\e\w\f\r\i\q\q\d\7\o\e\m\9\w\h\u\l\1\z\0\l\w\j\a\s\4\c\c\2\f\9\z\m\c\n\6\7\u\l\7\w\c\q\b\g\0\a\h\n\y\p\g\h\7\j\u\t\r\1\e\5\s\b\w\1\z\e\h\z\n\k\0\x\r\k\l\l\4\9\y\j\j\o\f\r\g\e\1\l\n\x\z\n\t\x\h\p\t\o\4\w\y\x\u\9\t\o\9\z\o\l\a\z\q\v\m\w\b\x\l\q\m\0\y\o\0\r\e\h\h\a\x\x\0\f\l\k\o\v\9\1\i\k\b\g\b\0\0\h\a\v\q\6\d\6\a\w\c\z\e\7\o\f\2 ]] 00:11:27.418 00:11:27.418 real 0m3.915s 00:11:27.418 user 0m2.009s 00:11:27.418 sys 0m1.900s 00:11:27.418 16:03:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:27.418 ************************************ 00:11:27.418 END TEST dd_flags_misc 00:11:27.418 ************************************ 00:11:27.418 16:03:57 -- common/autotest_common.sh@10 -- # set +x 00:11:27.418 16:03:57 -- dd/posix.sh@131 -- # tests_forced_aio 00:11:27.418 16:03:57 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:11:27.418 * Second test run, disabling liburing, forcing AIO 00:11:27.418 16:03:57 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:11:27.418 16:03:57 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:11:27.418 16:03:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:27.418 16:03:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:27.418 16:03:57 -- common/autotest_common.sh@10 -- # set +x 00:11:27.677 ************************************ 00:11:27.677 START TEST dd_flag_append_forced_aio 00:11:27.677 ************************************ 00:11:27.677 16:03:57 -- common/autotest_common.sh@1111 -- # append 00:11:27.677 16:03:57 -- dd/posix.sh@16 -- # local dump0 00:11:27.677 16:03:57 -- 
dd/posix.sh@17 -- # local dump1 00:11:27.677 16:03:57 -- dd/posix.sh@19 -- # gen_bytes 32 00:11:27.677 16:03:57 -- dd/common.sh@98 -- # xtrace_disable 00:11:27.677 16:03:57 -- common/autotest_common.sh@10 -- # set +x 00:11:27.677 16:03:57 -- dd/posix.sh@19 -- # dump0=koupgqwxunm6gk9fenlt0qm41vnmwkh5 00:11:27.677 16:03:57 -- dd/posix.sh@20 -- # gen_bytes 32 00:11:27.677 16:03:57 -- dd/common.sh@98 -- # xtrace_disable 00:11:27.677 16:03:57 -- common/autotest_common.sh@10 -- # set +x 00:11:27.677 16:03:57 -- dd/posix.sh@20 -- # dump1=zar1o90fyzvlijnadj0kifismp0jafne 00:11:27.677 16:03:57 -- dd/posix.sh@22 -- # printf %s koupgqwxunm6gk9fenlt0qm41vnmwkh5 00:11:27.677 16:03:57 -- dd/posix.sh@23 -- # printf %s zar1o90fyzvlijnadj0kifismp0jafne 00:11:27.677 16:03:57 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:11:27.677 [2024-04-15 16:03:57.470715] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:27.677 [2024-04-15 16:03:57.470812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75700 ] 00:11:27.677 [2024-04-15 16:03:57.620040] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.935 [2024-04-15 16:03:57.673584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.935 [2024-04-15 16:03:57.673670] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:28.193  Copying: 32/32 [B] (average 31 kBps) 00:11:28.193 00:11:28.193 16:03:57 -- dd/posix.sh@27 -- # [[ zar1o90fyzvlijnadj0kifismp0jafnekoupgqwxunm6gk9fenlt0qm41vnmwkh5 == \z\a\r\1\o\9\0\f\y\z\v\l\i\j\n\a\d\j\0\k\i\f\i\s\m\p\0\j\a\f\n\e\k\o\u\p\g\q\w\x\u\n\m\6\g\k\9\f\e\n\l\t\0\q\m\4\1\v\n\m\w\k\h\5 ]] 00:11:28.193 00:11:28.193 real 0m0.539s 00:11:28.193 user 0m0.271s 00:11:28.193 sys 0m0.146s 00:11:28.193 16:03:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:28.193 16:03:57 -- common/autotest_common.sh@10 -- # set +x 00:11:28.193 ************************************ 00:11:28.193 END TEST dd_flag_append_forced_aio 00:11:28.193 ************************************ 00:11:28.193 16:03:57 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:11:28.193 16:03:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:28.193 16:03:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:28.193 16:03:57 -- common/autotest_common.sh@10 -- # set +x 00:11:28.193 ************************************ 00:11:28.193 START TEST dd_flag_directory_forced_aio 00:11:28.193 ************************************ 00:11:28.193 16:03:58 -- common/autotest_common.sh@1111 -- # directory 00:11:28.193 16:03:58 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:28.193 16:03:58 -- common/autotest_common.sh@638 -- # local es=0 00:11:28.193 16:03:58 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:28.193 16:03:58 -- common/autotest_common.sh@626 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:28.193 16:03:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:28.193 16:03:58 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:28.193 16:03:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:28.193 16:03:58 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:28.193 16:03:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:28.193 16:03:58 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:28.193 16:03:58 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:28.193 16:03:58 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:28.193 [2024-04-15 16:03:58.153329] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:28.193 [2024-04-15 16:03:58.153742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75737 ] 00:11:28.451 [2024-04-15 16:03:58.301926] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.451 [2024-04-15 16:03:58.358179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.451 [2024-04-15 16:03:58.358262] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:28.710 [2024-04-15 16:03:58.434152] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:28.710 [2024-04-15 16:03:58.434221] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:28.710 [2024-04-15 16:03:58.434248] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:28.710 [2024-04-15 16:03:58.535752] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:28.710 16:03:58 -- common/autotest_common.sh@641 -- # es=236 00:11:28.710 16:03:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:28.710 16:03:58 -- common/autotest_common.sh@650 -- # es=108 00:11:28.710 16:03:58 -- common/autotest_common.sh@651 -- # case "$es" in 00:11:28.710 16:03:58 -- common/autotest_common.sh@658 -- # es=1 00:11:28.710 16:03:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:28.710 16:03:58 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:28.710 16:03:58 -- common/autotest_common.sh@638 -- # local es=0 00:11:28.710 16:03:58 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:28.710 16:03:58 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:28.710 16:03:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:28.710 16:03:58 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:28.710 
16:03:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:28.710 16:03:58 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:28.710 16:03:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:28.710 16:03:58 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:28.710 16:03:58 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:28.710 16:03:58 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:28.969 [2024-04-15 16:03:58.695776] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:28.969 [2024-04-15 16:03:58.695892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75752 ] 00:11:28.969 [2024-04-15 16:03:58.844235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.969 [2024-04-15 16:03:58.914183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.969 [2024-04-15 16:03:58.914298] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:29.227 [2024-04-15 16:03:58.986962] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:29.227 [2024-04-15 16:03:58.987022] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:29.227 [2024-04-15 16:03:58.987046] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:29.227 [2024-04-15 16:03:59.084763] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:29.227 16:03:59 -- common/autotest_common.sh@641 -- # es=236 00:11:29.227 16:03:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:29.227 16:03:59 -- common/autotest_common.sh@650 -- # es=108 00:11:29.227 16:03:59 -- common/autotest_common.sh@651 -- # case "$es" in 00:11:29.227 16:03:59 -- common/autotest_common.sh@658 -- # es=1 00:11:29.227 16:03:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:29.227 00:11:29.227 real 0m1.093s 00:11:29.227 user 0m0.574s 00:11:29.227 sys 0m0.303s 00:11:29.227 16:03:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:29.227 16:03:59 -- common/autotest_common.sh@10 -- # set +x 00:11:29.227 ************************************ 00:11:29.227 END TEST dd_flag_directory_forced_aio 00:11:29.227 ************************************ 00:11:29.486 16:03:59 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:11:29.486 16:03:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:29.486 16:03:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:29.486 16:03:59 -- common/autotest_common.sh@10 -- # set +x 00:11:29.486 ************************************ 00:11:29.486 START TEST dd_flag_nofollow_forced_aio 00:11:29.486 ************************************ 00:11:29.486 16:03:59 -- common/autotest_common.sh@1111 -- # nofollow 00:11:29.486 16:03:59 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:29.486 16:03:59 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:29.486 16:03:59 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:29.486 16:03:59 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:29.486 16:03:59 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:29.486 16:03:59 -- common/autotest_common.sh@638 -- # local es=0 00:11:29.486 16:03:59 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:29.486 16:03:59 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.486 16:03:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:29.486 16:03:59 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.486 16:03:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:29.486 16:03:59 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.486 16:03:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:29.486 16:03:59 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.486 16:03:59 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:29.486 16:03:59 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:29.486 [2024-04-15 16:03:59.379304] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:29.486 [2024-04-15 16:03:59.379427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75779 ] 00:11:29.746 [2024-04-15 16:03:59.528296] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.746 [2024-04-15 16:03:59.586478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.746 [2024-04-15 16:03:59.586564] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:29.746 [2024-04-15 16:03:59.660806] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:29.746 [2024-04-15 16:03:59.660859] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:29.746 [2024-04-15 16:03:59.660885] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:30.004 [2024-04-15 16:03:59.758382] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:30.004 16:03:59 -- common/autotest_common.sh@641 -- # es=216 00:11:30.004 16:03:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:30.004 16:03:59 -- common/autotest_common.sh@650 -- # es=88 00:11:30.004 16:03:59 -- common/autotest_common.sh@651 -- # case "$es" in 00:11:30.004 16:03:59 -- common/autotest_common.sh@658 -- # es=1 00:11:30.004 16:03:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:30.004 16:03:59 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:30.004 16:03:59 -- common/autotest_common.sh@638 -- # local es=0 00:11:30.004 16:03:59 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:30.004 16:03:59 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.004 16:03:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:30.004 16:03:59 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.004 16:03:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:30.004 16:03:59 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.004 16:03:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:30.004 16:03:59 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.004 16:03:59 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:30.004 16:03:59 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:30.004 [2024-04-15 16:03:59.901890] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:30.004 [2024-04-15 16:03:59.902010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75794 ] 00:11:30.262 [2024-04-15 16:04:00.048412] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.262 [2024-04-15 16:04:00.103457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.262 [2024-04-15 16:04:00.103546] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:30.262 [2024-04-15 16:04:00.177392] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:30.262 [2024-04-15 16:04:00.177465] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:30.262 [2024-04-15 16:04:00.177492] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:30.521 [2024-04-15 16:04:00.273047] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:30.521 16:04:00 -- common/autotest_common.sh@641 -- # es=216 00:11:30.521 16:04:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:30.521 16:04:00 -- common/autotest_common.sh@650 -- # es=88 00:11:30.521 16:04:00 -- common/autotest_common.sh@651 -- # case "$es" in 00:11:30.521 16:04:00 -- common/autotest_common.sh@658 -- # es=1 00:11:30.521 16:04:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:30.521 16:04:00 -- dd/posix.sh@46 -- # gen_bytes 512 00:11:30.521 16:04:00 -- dd/common.sh@98 -- # xtrace_disable 00:11:30.521 16:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:30.521 16:04:00 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:30.521 [2024-04-15 16:04:00.416632] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:30.521 [2024-04-15 16:04:00.416743] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75796 ] 00:11:30.780 [2024-04-15 16:04:00.564214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.780 [2024-04-15 16:04:00.619341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.780 [2024-04-15 16:04:00.619431] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:31.038  Copying: 512/512 [B] (average 500 kBps) 00:11:31.038 00:11:31.038 16:04:00 -- dd/posix.sh@49 -- # [[ 04yj6e7l2fljs700qdzhin6ymun7amfa2cuw4m60volxs3swr9ay2sq8wbl54nfmsidx2ck6w05x9oo8msfdua0yh0k94nxt53bai76r1aw3wv4d47taibsfyz8ib2uj8qif193sqplmwlvihpenjshl2zf551k8fax8tcb5m7v2ppoyjhmprthpjaamnnfjjbnyursfsqv1svl2lpz5rsd7oadtnb1c80x38bgfsugvo61qko3lxczvil1x7f7ow5yl35utyzrur7sh66tt9beudkj2sf3drlzs7ydofgg4ftw8mnvy3o7vhogyq3qa5b8rqaiuls3pqt6bguuzgy8k1wsim3i2lrojwssur8joanv8y9a7c4xyvuzzpiol60crjrcn6slazgdwac9ujt0jac1zfixi94dd2oc4victa8low6o2sm3cgddcd8349mmwcr3cr4g9ug5bxssn5j6rd9sjqv7u8npt3glj21zxdk9u3k8qqtys964fp4sn == \0\4\y\j\6\e\7\l\2\f\l\j\s\7\0\0\q\d\z\h\i\n\6\y\m\u\n\7\a\m\f\a\2\c\u\w\4\m\6\0\v\o\l\x\s\3\s\w\r\9\a\y\2\s\q\8\w\b\l\5\4\n\f\m\s\i\d\x\2\c\k\6\w\0\5\x\9\o\o\8\m\s\f\d\u\a\0\y\h\0\k\9\4\n\x\t\5\3\b\a\i\7\6\r\1\a\w\3\w\v\4\d\4\7\t\a\i\b\s\f\y\z\8\i\b\2\u\j\8\q\i\f\1\9\3\s\q\p\l\m\w\l\v\i\h\p\e\n\j\s\h\l\2\z\f\5\5\1\k\8\f\a\x\8\t\c\b\5\m\7\v\2\p\p\o\y\j\h\m\p\r\t\h\p\j\a\a\m\n\n\f\j\j\b\n\y\u\r\s\f\s\q\v\1\s\v\l\2\l\p\z\5\r\s\d\7\o\a\d\t\n\b\1\c\8\0\x\3\8\b\g\f\s\u\g\v\o\6\1\q\k\o\3\l\x\c\z\v\i\l\1\x\7\f\7\o\w\5\y\l\3\5\u\t\y\z\r\u\r\7\s\h\6\6\t\t\9\b\e\u\d\k\j\2\s\f\3\d\r\l\z\s\7\y\d\o\f\g\g\4\f\t\w\8\m\n\v\y\3\o\7\v\h\o\g\y\q\3\q\a\5\b\8\r\q\a\i\u\l\s\3\p\q\t\6\b\g\u\u\z\g\y\8\k\1\w\s\i\m\3\i\2\l\r\o\j\w\s\s\u\r\8\j\o\a\n\v\8\y\9\a\7\c\4\x\y\v\u\z\z\p\i\o\l\6\0\c\r\j\r\c\n\6\s\l\a\z\g\d\w\a\c\9\u\j\t\0\j\a\c\1\z\f\i\x\i\9\4\d\d\2\o\c\4\v\i\c\t\a\8\l\o\w\6\o\2\s\m\3\c\g\d\d\c\d\8\3\4\9\m\m\w\c\r\3\c\r\4\g\9\u\g\5\b\x\s\s\n\5\j\6\r\d\9\s\j\q\v\7\u\8\n\p\t\3\g\l\j\2\1\z\x\d\k\9\u\3\k\8\q\q\t\y\s\9\6\4\f\p\4\s\n ]] 00:11:31.038 00:11:31.038 real 0m1.586s 00:11:31.038 user 0m0.815s 00:11:31.038 sys 0m0.435s 00:11:31.038 16:04:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:31.038 ************************************ 00:11:31.038 END TEST dd_flag_nofollow_forced_aio 00:11:31.038 ************************************ 00:11:31.038 16:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:31.038 16:04:00 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:11:31.038 16:04:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:31.038 16:04:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:31.038 16:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:31.295 ************************************ 00:11:31.296 START TEST dd_flag_noatime_forced_aio 00:11:31.296 ************************************ 00:11:31.296 16:04:01 -- common/autotest_common.sh@1111 -- # noatime 00:11:31.296 16:04:01 -- dd/posix.sh@53 -- # local atime_if 00:11:31.296 16:04:01 -- dd/posix.sh@54 -- # local atime_of 00:11:31.296 16:04:01 -- dd/posix.sh@58 -- # gen_bytes 512 00:11:31.296 16:04:01 -- dd/common.sh@98 -- # xtrace_disable 00:11:31.296 16:04:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.296 16:04:01 -- dd/posix.sh@60 -- # stat --printf=%X 
/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:31.296 16:04:01 -- dd/posix.sh@60 -- # atime_if=1713197040 00:11:31.296 16:04:01 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:31.296 16:04:01 -- dd/posix.sh@61 -- # atime_of=1713197040 00:11:31.296 16:04:01 -- dd/posix.sh@66 -- # sleep 1 00:11:32.228 16:04:02 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:32.228 [2024-04-15 16:04:02.111306] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:32.228 [2024-04-15 16:04:02.111414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75846 ] 00:11:32.485 [2024-04-15 16:04:02.256566] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.485 [2024-04-15 16:04:02.309996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.485 [2024-04-15 16:04:02.310077] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:32.744  Copying: 512/512 [B] (average 500 kBps) 00:11:32.744 00:11:32.744 16:04:02 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:32.744 16:04:02 -- dd/posix.sh@69 -- # (( atime_if == 1713197040 )) 00:11:32.744 16:04:02 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:32.744 16:04:02 -- dd/posix.sh@70 -- # (( atime_of == 1713197040 )) 00:11:32.744 16:04:02 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:32.744 [2024-04-15 16:04:02.653341] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:32.744 [2024-04-15 16:04:02.653461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75852 ] 00:11:33.002 [2024-04-15 16:04:02.797446] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.002 [2024-04-15 16:04:02.850486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.002 [2024-04-15 16:04:02.850562] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:33.261  Copying: 512/512 [B] (average 500 kBps) 00:11:33.261 00:11:33.261 16:04:03 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:33.261 16:04:03 -- dd/posix.sh@73 -- # (( atime_if < 1713197042 )) 00:11:33.261 00:11:33.261 real 0m2.098s 00:11:33.261 user 0m0.562s 00:11:33.261 sys 0m0.293s 00:11:33.261 16:04:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:33.261 16:04:03 -- common/autotest_common.sh@10 -- # set +x 00:11:33.261 ************************************ 00:11:33.261 END TEST dd_flag_noatime_forced_aio 00:11:33.261 ************************************ 00:11:33.261 16:04:03 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:11:33.261 16:04:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:33.261 16:04:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.261 16:04:03 -- common/autotest_common.sh@10 -- # set +x 00:11:33.519 ************************************ 00:11:33.519 START TEST dd_flags_misc_forced_aio 00:11:33.519 ************************************ 00:11:33.519 16:04:03 -- common/autotest_common.sh@1111 -- # io 00:11:33.519 16:04:03 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:11:33.519 16:04:03 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:11:33.519 16:04:03 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:11:33.519 16:04:03 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:33.519 16:04:03 -- dd/posix.sh@86 -- # gen_bytes 512 00:11:33.519 16:04:03 -- dd/common.sh@98 -- # xtrace_disable 00:11:33.519 16:04:03 -- common/autotest_common.sh@10 -- # set +x 00:11:33.519 16:04:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:33.519 16:04:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:33.519 [2024-04-15 16:04:03.317460] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:33.519 [2024-04-15 16:04:03.317550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75888 ] 00:11:33.519 [2024-04-15 16:04:03.454222] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.812 [2024-04-15 16:04:03.508529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.812 [2024-04-15 16:04:03.508630] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:34.073  Copying: 512/512 [B] (average 500 kBps) 00:11:34.073 00:11:34.073 16:04:03 -- dd/posix.sh@93 -- # [[ 95qqkz712q0nm7kbruxnaidq83q32eweetkputpevrteexqy3uu364srcfzk997yxdnbfmsy3zypmnr9t6vwojcbxm81iewqb4vzpuol85vtrup802hgt5b97lzh8kh3epegni29r4muyndfj78p4mpsl49smifpd55x9s31n2dww7tnlxcgeh0uhc26l0tlo0o5yot5l0slvjuzgqgys3nivp0sc6ev70kzyycd8wksf7qbhg05d0gezhlet3xm7nvxiatfqkpmgtdwtbwjvmy7lk2y46r3b5ghoxdu6zn2oc233jv9rugijy04psldzab4kdm3mlg2e45cncxwrp97n0tlwhtsrjt46ki9r0m83m9b1odtvmobi8kyo1rv0lfxwh5glfyezl5qvzfvzg2z1j8bloq69sfgtzwns7o2m30s8g7t3l3p5jseoanj73jao1joni1s90o7lg7bzrfpexvc7tf676juexh5xurpdkdz19rfh3do0io8c4ad == \9\5\q\q\k\z\7\1\2\q\0\n\m\7\k\b\r\u\x\n\a\i\d\q\8\3\q\3\2\e\w\e\e\t\k\p\u\t\p\e\v\r\t\e\e\x\q\y\3\u\u\3\6\4\s\r\c\f\z\k\9\9\7\y\x\d\n\b\f\m\s\y\3\z\y\p\m\n\r\9\t\6\v\w\o\j\c\b\x\m\8\1\i\e\w\q\b\4\v\z\p\u\o\l\8\5\v\t\r\u\p\8\0\2\h\g\t\5\b\9\7\l\z\h\8\k\h\3\e\p\e\g\n\i\2\9\r\4\m\u\y\n\d\f\j\7\8\p\4\m\p\s\l\4\9\s\m\i\f\p\d\5\5\x\9\s\3\1\n\2\d\w\w\7\t\n\l\x\c\g\e\h\0\u\h\c\2\6\l\0\t\l\o\0\o\5\y\o\t\5\l\0\s\l\v\j\u\z\g\q\g\y\s\3\n\i\v\p\0\s\c\6\e\v\7\0\k\z\y\y\c\d\8\w\k\s\f\7\q\b\h\g\0\5\d\0\g\e\z\h\l\e\t\3\x\m\7\n\v\x\i\a\t\f\q\k\p\m\g\t\d\w\t\b\w\j\v\m\y\7\l\k\2\y\4\6\r\3\b\5\g\h\o\x\d\u\6\z\n\2\o\c\2\3\3\j\v\9\r\u\g\i\j\y\0\4\p\s\l\d\z\a\b\4\k\d\m\3\m\l\g\2\e\4\5\c\n\c\x\w\r\p\9\7\n\0\t\l\w\h\t\s\r\j\t\4\6\k\i\9\r\0\m\8\3\m\9\b\1\o\d\t\v\m\o\b\i\8\k\y\o\1\r\v\0\l\f\x\w\h\5\g\l\f\y\e\z\l\5\q\v\z\f\v\z\g\2\z\1\j\8\b\l\o\q\6\9\s\f\g\t\z\w\n\s\7\o\2\m\3\0\s\8\g\7\t\3\l\3\p\5\j\s\e\o\a\n\j\7\3\j\a\o\1\j\o\n\i\1\s\9\0\o\7\l\g\7\b\z\r\f\p\e\x\v\c\7\t\f\6\7\6\j\u\e\x\h\5\x\u\r\p\d\k\d\z\1\9\r\f\h\3\d\o\0\i\o\8\c\4\a\d ]] 00:11:34.073 16:04:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:34.073 16:04:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:34.073 [2024-04-15 16:04:03.903473] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:34.073 [2024-04-15 16:04:03.903592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75896 ] 00:11:34.340 [2024-04-15 16:04:04.051154] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.340 [2024-04-15 16:04:04.107891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.340 [2024-04-15 16:04:04.107991] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:34.600  Copying: 512/512 [B] (average 500 kBps) 00:11:34.600 00:11:34.600 16:04:04 -- dd/posix.sh@93 -- # [[ 95qqkz712q0nm7kbruxnaidq83q32eweetkputpevrteexqy3uu364srcfzk997yxdnbfmsy3zypmnr9t6vwojcbxm81iewqb4vzpuol85vtrup802hgt5b97lzh8kh3epegni29r4muyndfj78p4mpsl49smifpd55x9s31n2dww7tnlxcgeh0uhc26l0tlo0o5yot5l0slvjuzgqgys3nivp0sc6ev70kzyycd8wksf7qbhg05d0gezhlet3xm7nvxiatfqkpmgtdwtbwjvmy7lk2y46r3b5ghoxdu6zn2oc233jv9rugijy04psldzab4kdm3mlg2e45cncxwrp97n0tlwhtsrjt46ki9r0m83m9b1odtvmobi8kyo1rv0lfxwh5glfyezl5qvzfvzg2z1j8bloq69sfgtzwns7o2m30s8g7t3l3p5jseoanj73jao1joni1s90o7lg7bzrfpexvc7tf676juexh5xurpdkdz19rfh3do0io8c4ad == \9\5\q\q\k\z\7\1\2\q\0\n\m\7\k\b\r\u\x\n\a\i\d\q\8\3\q\3\2\e\w\e\e\t\k\p\u\t\p\e\v\r\t\e\e\x\q\y\3\u\u\3\6\4\s\r\c\f\z\k\9\9\7\y\x\d\n\b\f\m\s\y\3\z\y\p\m\n\r\9\t\6\v\w\o\j\c\b\x\m\8\1\i\e\w\q\b\4\v\z\p\u\o\l\8\5\v\t\r\u\p\8\0\2\h\g\t\5\b\9\7\l\z\h\8\k\h\3\e\p\e\g\n\i\2\9\r\4\m\u\y\n\d\f\j\7\8\p\4\m\p\s\l\4\9\s\m\i\f\p\d\5\5\x\9\s\3\1\n\2\d\w\w\7\t\n\l\x\c\g\e\h\0\u\h\c\2\6\l\0\t\l\o\0\o\5\y\o\t\5\l\0\s\l\v\j\u\z\g\q\g\y\s\3\n\i\v\p\0\s\c\6\e\v\7\0\k\z\y\y\c\d\8\w\k\s\f\7\q\b\h\g\0\5\d\0\g\e\z\h\l\e\t\3\x\m\7\n\v\x\i\a\t\f\q\k\p\m\g\t\d\w\t\b\w\j\v\m\y\7\l\k\2\y\4\6\r\3\b\5\g\h\o\x\d\u\6\z\n\2\o\c\2\3\3\j\v\9\r\u\g\i\j\y\0\4\p\s\l\d\z\a\b\4\k\d\m\3\m\l\g\2\e\4\5\c\n\c\x\w\r\p\9\7\n\0\t\l\w\h\t\s\r\j\t\4\6\k\i\9\r\0\m\8\3\m\9\b\1\o\d\t\v\m\o\b\i\8\k\y\o\1\r\v\0\l\f\x\w\h\5\g\l\f\y\e\z\l\5\q\v\z\f\v\z\g\2\z\1\j\8\b\l\o\q\6\9\s\f\g\t\z\w\n\s\7\o\2\m\3\0\s\8\g\7\t\3\l\3\p\5\j\s\e\o\a\n\j\7\3\j\a\o\1\j\o\n\i\1\s\9\0\o\7\l\g\7\b\z\r\f\p\e\x\v\c\7\t\f\6\7\6\j\u\e\x\h\5\x\u\r\p\d\k\d\z\1\9\r\f\h\3\d\o\0\i\o\8\c\4\a\d ]] 00:11:34.600 16:04:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:34.600 16:04:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:34.600 [2024-04-15 16:04:04.431070] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:34.600 [2024-04-15 16:04:04.431176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75903 ] 00:11:34.858 [2024-04-15 16:04:04.570548] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.858 [2024-04-15 16:04:04.623867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.858 [2024-04-15 16:04:04.623956] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:35.126  Copying: 512/512 [B] (average 125 kBps) 00:11:35.126 00:11:35.126 16:04:04 -- dd/posix.sh@93 -- # [[ 95qqkz712q0nm7kbruxnaidq83q32eweetkputpevrteexqy3uu364srcfzk997yxdnbfmsy3zypmnr9t6vwojcbxm81iewqb4vzpuol85vtrup802hgt5b97lzh8kh3epegni29r4muyndfj78p4mpsl49smifpd55x9s31n2dww7tnlxcgeh0uhc26l0tlo0o5yot5l0slvjuzgqgys3nivp0sc6ev70kzyycd8wksf7qbhg05d0gezhlet3xm7nvxiatfqkpmgtdwtbwjvmy7lk2y46r3b5ghoxdu6zn2oc233jv9rugijy04psldzab4kdm3mlg2e45cncxwrp97n0tlwhtsrjt46ki9r0m83m9b1odtvmobi8kyo1rv0lfxwh5glfyezl5qvzfvzg2z1j8bloq69sfgtzwns7o2m30s8g7t3l3p5jseoanj73jao1joni1s90o7lg7bzrfpexvc7tf676juexh5xurpdkdz19rfh3do0io8c4ad == \9\5\q\q\k\z\7\1\2\q\0\n\m\7\k\b\r\u\x\n\a\i\d\q\8\3\q\3\2\e\w\e\e\t\k\p\u\t\p\e\v\r\t\e\e\x\q\y\3\u\u\3\6\4\s\r\c\f\z\k\9\9\7\y\x\d\n\b\f\m\s\y\3\z\y\p\m\n\r\9\t\6\v\w\o\j\c\b\x\m\8\1\i\e\w\q\b\4\v\z\p\u\o\l\8\5\v\t\r\u\p\8\0\2\h\g\t\5\b\9\7\l\z\h\8\k\h\3\e\p\e\g\n\i\2\9\r\4\m\u\y\n\d\f\j\7\8\p\4\m\p\s\l\4\9\s\m\i\f\p\d\5\5\x\9\s\3\1\n\2\d\w\w\7\t\n\l\x\c\g\e\h\0\u\h\c\2\6\l\0\t\l\o\0\o\5\y\o\t\5\l\0\s\l\v\j\u\z\g\q\g\y\s\3\n\i\v\p\0\s\c\6\e\v\7\0\k\z\y\y\c\d\8\w\k\s\f\7\q\b\h\g\0\5\d\0\g\e\z\h\l\e\t\3\x\m\7\n\v\x\i\a\t\f\q\k\p\m\g\t\d\w\t\b\w\j\v\m\y\7\l\k\2\y\4\6\r\3\b\5\g\h\o\x\d\u\6\z\n\2\o\c\2\3\3\j\v\9\r\u\g\i\j\y\0\4\p\s\l\d\z\a\b\4\k\d\m\3\m\l\g\2\e\4\5\c\n\c\x\w\r\p\9\7\n\0\t\l\w\h\t\s\r\j\t\4\6\k\i\9\r\0\m\8\3\m\9\b\1\o\d\t\v\m\o\b\i\8\k\y\o\1\r\v\0\l\f\x\w\h\5\g\l\f\y\e\z\l\5\q\v\z\f\v\z\g\2\z\1\j\8\b\l\o\q\6\9\s\f\g\t\z\w\n\s\7\o\2\m\3\0\s\8\g\7\t\3\l\3\p\5\j\s\e\o\a\n\j\7\3\j\a\o\1\j\o\n\i\1\s\9\0\o\7\l\g\7\b\z\r\f\p\e\x\v\c\7\t\f\6\7\6\j\u\e\x\h\5\x\u\r\p\d\k\d\z\1\9\r\f\h\3\d\o\0\i\o\8\c\4\a\d ]] 00:11:35.126 16:04:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:35.126 16:04:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:35.126 [2024-04-15 16:04:04.941507] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:35.126 [2024-04-15 16:04:04.941630] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75911 ] 00:11:35.385 [2024-04-15 16:04:05.091931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.385 [2024-04-15 16:04:05.143715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.385 [2024-04-15 16:04:05.143794] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:35.643  Copying: 512/512 [B] (average 500 kBps) 00:11:35.643 00:11:35.643 16:04:05 -- dd/posix.sh@93 -- # [[ 95qqkz712q0nm7kbruxnaidq83q32eweetkputpevrteexqy3uu364srcfzk997yxdnbfmsy3zypmnr9t6vwojcbxm81iewqb4vzpuol85vtrup802hgt5b97lzh8kh3epegni29r4muyndfj78p4mpsl49smifpd55x9s31n2dww7tnlxcgeh0uhc26l0tlo0o5yot5l0slvjuzgqgys3nivp0sc6ev70kzyycd8wksf7qbhg05d0gezhlet3xm7nvxiatfqkpmgtdwtbwjvmy7lk2y46r3b5ghoxdu6zn2oc233jv9rugijy04psldzab4kdm3mlg2e45cncxwrp97n0tlwhtsrjt46ki9r0m83m9b1odtvmobi8kyo1rv0lfxwh5glfyezl5qvzfvzg2z1j8bloq69sfgtzwns7o2m30s8g7t3l3p5jseoanj73jao1joni1s90o7lg7bzrfpexvc7tf676juexh5xurpdkdz19rfh3do0io8c4ad == \9\5\q\q\k\z\7\1\2\q\0\n\m\7\k\b\r\u\x\n\a\i\d\q\8\3\q\3\2\e\w\e\e\t\k\p\u\t\p\e\v\r\t\e\e\x\q\y\3\u\u\3\6\4\s\r\c\f\z\k\9\9\7\y\x\d\n\b\f\m\s\y\3\z\y\p\m\n\r\9\t\6\v\w\o\j\c\b\x\m\8\1\i\e\w\q\b\4\v\z\p\u\o\l\8\5\v\t\r\u\p\8\0\2\h\g\t\5\b\9\7\l\z\h\8\k\h\3\e\p\e\g\n\i\2\9\r\4\m\u\y\n\d\f\j\7\8\p\4\m\p\s\l\4\9\s\m\i\f\p\d\5\5\x\9\s\3\1\n\2\d\w\w\7\t\n\l\x\c\g\e\h\0\u\h\c\2\6\l\0\t\l\o\0\o\5\y\o\t\5\l\0\s\l\v\j\u\z\g\q\g\y\s\3\n\i\v\p\0\s\c\6\e\v\7\0\k\z\y\y\c\d\8\w\k\s\f\7\q\b\h\g\0\5\d\0\g\e\z\h\l\e\t\3\x\m\7\n\v\x\i\a\t\f\q\k\p\m\g\t\d\w\t\b\w\j\v\m\y\7\l\k\2\y\4\6\r\3\b\5\g\h\o\x\d\u\6\z\n\2\o\c\2\3\3\j\v\9\r\u\g\i\j\y\0\4\p\s\l\d\z\a\b\4\k\d\m\3\m\l\g\2\e\4\5\c\n\c\x\w\r\p\9\7\n\0\t\l\w\h\t\s\r\j\t\4\6\k\i\9\r\0\m\8\3\m\9\b\1\o\d\t\v\m\o\b\i\8\k\y\o\1\r\v\0\l\f\x\w\h\5\g\l\f\y\e\z\l\5\q\v\z\f\v\z\g\2\z\1\j\8\b\l\o\q\6\9\s\f\g\t\z\w\n\s\7\o\2\m\3\0\s\8\g\7\t\3\l\3\p\5\j\s\e\o\a\n\j\7\3\j\a\o\1\j\o\n\i\1\s\9\0\o\7\l\g\7\b\z\r\f\p\e\x\v\c\7\t\f\6\7\6\j\u\e\x\h\5\x\u\r\p\d\k\d\z\1\9\r\f\h\3\d\o\0\i\o\8\c\4\a\d ]] 00:11:35.643 16:04:05 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:35.643 16:04:05 -- dd/posix.sh@86 -- # gen_bytes 512 00:11:35.643 16:04:05 -- dd/common.sh@98 -- # xtrace_disable 00:11:35.643 16:04:05 -- common/autotest_common.sh@10 -- # set +x 00:11:35.643 16:04:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:35.643 16:04:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:35.643 [2024-04-15 16:04:05.470656] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:35.643 [2024-04-15 16:04:05.470762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75918 ] 00:11:35.908 [2024-04-15 16:04:05.623850] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.908 [2024-04-15 16:04:05.678135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.908 [2024-04-15 16:04:05.678226] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:36.172  Copying: 512/512 [B] (average 500 kBps) 00:11:36.172 00:11:36.172 16:04:05 -- dd/posix.sh@93 -- # [[ 4nm1gxo5vqkbi5wkotpf2al3sglg90rpolbqstb3kysjw4es5rv0utzzqbdravixm7ixn5xnqylsgymb239bx2u1cxy6dn53zypywzoaqyxb02blp9i02olv7pqkvrczt8epye7juhu0gj32w5dg08dilf0vthq1xg7lxkn6o9lycnj1uk3404wttjczpxpf5vt2rmm4kzanwt2f8e5yz76187l8jk2krxkw3pz0teuiquylkup7oc6koyd7aabz007thbof2xtoc9lixj84gfc9775hn5ydbo48upm9y2pyhla5mjcy8sg9h08ms312apy4pt8k1pwilsk0d3xugv6nppro860v9z9emymgjpxe4x4gdqirxoz2m1qu83jhzrmbf1pwucu05m48v8y2kgrpaurhvd7fxgy17vh9nn72ixfqyrhxi6xrnkpnlgggngvnx41wv2g1a056i5wluzmk57szrwtaw2ookfzmjp4b3umx2z4joroo692w9qb8 == \4\n\m\1\g\x\o\5\v\q\k\b\i\5\w\k\o\t\p\f\2\a\l\3\s\g\l\g\9\0\r\p\o\l\b\q\s\t\b\3\k\y\s\j\w\4\e\s\5\r\v\0\u\t\z\z\q\b\d\r\a\v\i\x\m\7\i\x\n\5\x\n\q\y\l\s\g\y\m\b\2\3\9\b\x\2\u\1\c\x\y\6\d\n\5\3\z\y\p\y\w\z\o\a\q\y\x\b\0\2\b\l\p\9\i\0\2\o\l\v\7\p\q\k\v\r\c\z\t\8\e\p\y\e\7\j\u\h\u\0\g\j\3\2\w\5\d\g\0\8\d\i\l\f\0\v\t\h\q\1\x\g\7\l\x\k\n\6\o\9\l\y\c\n\j\1\u\k\3\4\0\4\w\t\t\j\c\z\p\x\p\f\5\v\t\2\r\m\m\4\k\z\a\n\w\t\2\f\8\e\5\y\z\7\6\1\8\7\l\8\j\k\2\k\r\x\k\w\3\p\z\0\t\e\u\i\q\u\y\l\k\u\p\7\o\c\6\k\o\y\d\7\a\a\b\z\0\0\7\t\h\b\o\f\2\x\t\o\c\9\l\i\x\j\8\4\g\f\c\9\7\7\5\h\n\5\y\d\b\o\4\8\u\p\m\9\y\2\p\y\h\l\a\5\m\j\c\y\8\s\g\9\h\0\8\m\s\3\1\2\a\p\y\4\p\t\8\k\1\p\w\i\l\s\k\0\d\3\x\u\g\v\6\n\p\p\r\o\8\6\0\v\9\z\9\e\m\y\m\g\j\p\x\e\4\x\4\g\d\q\i\r\x\o\z\2\m\1\q\u\8\3\j\h\z\r\m\b\f\1\p\w\u\c\u\0\5\m\4\8\v\8\y\2\k\g\r\p\a\u\r\h\v\d\7\f\x\g\y\1\7\v\h\9\n\n\7\2\i\x\f\q\y\r\h\x\i\6\x\r\n\k\p\n\l\g\g\g\n\g\v\n\x\4\1\w\v\2\g\1\a\0\5\6\i\5\w\l\u\z\m\k\5\7\s\z\r\w\t\a\w\2\o\o\k\f\z\m\j\p\4\b\3\u\m\x\2\z\4\j\o\r\o\o\6\9\2\w\9\q\b\8 ]] 00:11:36.172 16:04:05 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:36.172 16:04:05 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:36.172 [2024-04-15 16:04:06.009050] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:36.172 [2024-04-15 16:04:06.009156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75926 ] 00:11:36.451 [2024-04-15 16:04:06.153464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.451 [2024-04-15 16:04:06.209782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.452 [2024-04-15 16:04:06.209864] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:36.717  Copying: 512/512 [B] (average 500 kBps) 00:11:36.717 00:11:36.717 16:04:06 -- dd/posix.sh@93 -- # [[ 4nm1gxo5vqkbi5wkotpf2al3sglg90rpolbqstb3kysjw4es5rv0utzzqbdravixm7ixn5xnqylsgymb239bx2u1cxy6dn53zypywzoaqyxb02blp9i02olv7pqkvrczt8epye7juhu0gj32w5dg08dilf0vthq1xg7lxkn6o9lycnj1uk3404wttjczpxpf5vt2rmm4kzanwt2f8e5yz76187l8jk2krxkw3pz0teuiquylkup7oc6koyd7aabz007thbof2xtoc9lixj84gfc9775hn5ydbo48upm9y2pyhla5mjcy8sg9h08ms312apy4pt8k1pwilsk0d3xugv6nppro860v9z9emymgjpxe4x4gdqirxoz2m1qu83jhzrmbf1pwucu05m48v8y2kgrpaurhvd7fxgy17vh9nn72ixfqyrhxi6xrnkpnlgggngvnx41wv2g1a056i5wluzmk57szrwtaw2ookfzmjp4b3umx2z4joroo692w9qb8 == \4\n\m\1\g\x\o\5\v\q\k\b\i\5\w\k\o\t\p\f\2\a\l\3\s\g\l\g\9\0\r\p\o\l\b\q\s\t\b\3\k\y\s\j\w\4\e\s\5\r\v\0\u\t\z\z\q\b\d\r\a\v\i\x\m\7\i\x\n\5\x\n\q\y\l\s\g\y\m\b\2\3\9\b\x\2\u\1\c\x\y\6\d\n\5\3\z\y\p\y\w\z\o\a\q\y\x\b\0\2\b\l\p\9\i\0\2\o\l\v\7\p\q\k\v\r\c\z\t\8\e\p\y\e\7\j\u\h\u\0\g\j\3\2\w\5\d\g\0\8\d\i\l\f\0\v\t\h\q\1\x\g\7\l\x\k\n\6\o\9\l\y\c\n\j\1\u\k\3\4\0\4\w\t\t\j\c\z\p\x\p\f\5\v\t\2\r\m\m\4\k\z\a\n\w\t\2\f\8\e\5\y\z\7\6\1\8\7\l\8\j\k\2\k\r\x\k\w\3\p\z\0\t\e\u\i\q\u\y\l\k\u\p\7\o\c\6\k\o\y\d\7\a\a\b\z\0\0\7\t\h\b\o\f\2\x\t\o\c\9\l\i\x\j\8\4\g\f\c\9\7\7\5\h\n\5\y\d\b\o\4\8\u\p\m\9\y\2\p\y\h\l\a\5\m\j\c\y\8\s\g\9\h\0\8\m\s\3\1\2\a\p\y\4\p\t\8\k\1\p\w\i\l\s\k\0\d\3\x\u\g\v\6\n\p\p\r\o\8\6\0\v\9\z\9\e\m\y\m\g\j\p\x\e\4\x\4\g\d\q\i\r\x\o\z\2\m\1\q\u\8\3\j\h\z\r\m\b\f\1\p\w\u\c\u\0\5\m\4\8\v\8\y\2\k\g\r\p\a\u\r\h\v\d\7\f\x\g\y\1\7\v\h\9\n\n\7\2\i\x\f\q\y\r\h\x\i\6\x\r\n\k\p\n\l\g\g\g\n\g\v\n\x\4\1\w\v\2\g\1\a\0\5\6\i\5\w\l\u\z\m\k\5\7\s\z\r\w\t\a\w\2\o\o\k\f\z\m\j\p\4\b\3\u\m\x\2\z\4\j\o\r\o\o\6\9\2\w\9\q\b\8 ]] 00:11:36.717 16:04:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:36.717 16:04:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:36.717 [2024-04-15 16:04:06.520857] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:36.717 [2024-04-15 16:04:06.520940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75933 ] 00:11:36.717 [2024-04-15 16:04:06.658242] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.974 [2024-04-15 16:04:06.707262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.974 [2024-04-15 16:04:06.707343] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:37.234  Copying: 512/512 [B] (average 166 kBps) 00:11:37.234 00:11:37.234 16:04:06 -- dd/posix.sh@93 -- # [[ 4nm1gxo5vqkbi5wkotpf2al3sglg90rpolbqstb3kysjw4es5rv0utzzqbdravixm7ixn5xnqylsgymb239bx2u1cxy6dn53zypywzoaqyxb02blp9i02olv7pqkvrczt8epye7juhu0gj32w5dg08dilf0vthq1xg7lxkn6o9lycnj1uk3404wttjczpxpf5vt2rmm4kzanwt2f8e5yz76187l8jk2krxkw3pz0teuiquylkup7oc6koyd7aabz007thbof2xtoc9lixj84gfc9775hn5ydbo48upm9y2pyhla5mjcy8sg9h08ms312apy4pt8k1pwilsk0d3xugv6nppro860v9z9emymgjpxe4x4gdqirxoz2m1qu83jhzrmbf1pwucu05m48v8y2kgrpaurhvd7fxgy17vh9nn72ixfqyrhxi6xrnkpnlgggngvnx41wv2g1a056i5wluzmk57szrwtaw2ookfzmjp4b3umx2z4joroo692w9qb8 == \4\n\m\1\g\x\o\5\v\q\k\b\i\5\w\k\o\t\p\f\2\a\l\3\s\g\l\g\9\0\r\p\o\l\b\q\s\t\b\3\k\y\s\j\w\4\e\s\5\r\v\0\u\t\z\z\q\b\d\r\a\v\i\x\m\7\i\x\n\5\x\n\q\y\l\s\g\y\m\b\2\3\9\b\x\2\u\1\c\x\y\6\d\n\5\3\z\y\p\y\w\z\o\a\q\y\x\b\0\2\b\l\p\9\i\0\2\o\l\v\7\p\q\k\v\r\c\z\t\8\e\p\y\e\7\j\u\h\u\0\g\j\3\2\w\5\d\g\0\8\d\i\l\f\0\v\t\h\q\1\x\g\7\l\x\k\n\6\o\9\l\y\c\n\j\1\u\k\3\4\0\4\w\t\t\j\c\z\p\x\p\f\5\v\t\2\r\m\m\4\k\z\a\n\w\t\2\f\8\e\5\y\z\7\6\1\8\7\l\8\j\k\2\k\r\x\k\w\3\p\z\0\t\e\u\i\q\u\y\l\k\u\p\7\o\c\6\k\o\y\d\7\a\a\b\z\0\0\7\t\h\b\o\f\2\x\t\o\c\9\l\i\x\j\8\4\g\f\c\9\7\7\5\h\n\5\y\d\b\o\4\8\u\p\m\9\y\2\p\y\h\l\a\5\m\j\c\y\8\s\g\9\h\0\8\m\s\3\1\2\a\p\y\4\p\t\8\k\1\p\w\i\l\s\k\0\d\3\x\u\g\v\6\n\p\p\r\o\8\6\0\v\9\z\9\e\m\y\m\g\j\p\x\e\4\x\4\g\d\q\i\r\x\o\z\2\m\1\q\u\8\3\j\h\z\r\m\b\f\1\p\w\u\c\u\0\5\m\4\8\v\8\y\2\k\g\r\p\a\u\r\h\v\d\7\f\x\g\y\1\7\v\h\9\n\n\7\2\i\x\f\q\y\r\h\x\i\6\x\r\n\k\p\n\l\g\g\g\n\g\v\n\x\4\1\w\v\2\g\1\a\0\5\6\i\5\w\l\u\z\m\k\5\7\s\z\r\w\t\a\w\2\o\o\k\f\z\m\j\p\4\b\3\u\m\x\2\z\4\j\o\r\o\o\6\9\2\w\9\q\b\8 ]] 00:11:37.234 16:04:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:37.234 16:04:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:37.234 [2024-04-15 16:04:07.023421] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:37.234 [2024-04-15 16:04:07.023525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75941 ] 00:11:37.234 [2024-04-15 16:04:07.168886] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.493 [2024-04-15 16:04:07.241236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.493 [2024-04-15 16:04:07.241344] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:37.751  Copying: 512/512 [B] (average 125 kBps) 00:11:37.751 00:11:37.752 16:04:07 -- dd/posix.sh@93 -- # [[ 4nm1gxo5vqkbi5wkotpf2al3sglg90rpolbqstb3kysjw4es5rv0utzzqbdravixm7ixn5xnqylsgymb239bx2u1cxy6dn53zypywzoaqyxb02blp9i02olv7pqkvrczt8epye7juhu0gj32w5dg08dilf0vthq1xg7lxkn6o9lycnj1uk3404wttjczpxpf5vt2rmm4kzanwt2f8e5yz76187l8jk2krxkw3pz0teuiquylkup7oc6koyd7aabz007thbof2xtoc9lixj84gfc9775hn5ydbo48upm9y2pyhla5mjcy8sg9h08ms312apy4pt8k1pwilsk0d3xugv6nppro860v9z9emymgjpxe4x4gdqirxoz2m1qu83jhzrmbf1pwucu05m48v8y2kgrpaurhvd7fxgy17vh9nn72ixfqyrhxi6xrnkpnlgggngvnx41wv2g1a056i5wluzmk57szrwtaw2ookfzmjp4b3umx2z4joroo692w9qb8 == \4\n\m\1\g\x\o\5\v\q\k\b\i\5\w\k\o\t\p\f\2\a\l\3\s\g\l\g\9\0\r\p\o\l\b\q\s\t\b\3\k\y\s\j\w\4\e\s\5\r\v\0\u\t\z\z\q\b\d\r\a\v\i\x\m\7\i\x\n\5\x\n\q\y\l\s\g\y\m\b\2\3\9\b\x\2\u\1\c\x\y\6\d\n\5\3\z\y\p\y\w\z\o\a\q\y\x\b\0\2\b\l\p\9\i\0\2\o\l\v\7\p\q\k\v\r\c\z\t\8\e\p\y\e\7\j\u\h\u\0\g\j\3\2\w\5\d\g\0\8\d\i\l\f\0\v\t\h\q\1\x\g\7\l\x\k\n\6\o\9\l\y\c\n\j\1\u\k\3\4\0\4\w\t\t\j\c\z\p\x\p\f\5\v\t\2\r\m\m\4\k\z\a\n\w\t\2\f\8\e\5\y\z\7\6\1\8\7\l\8\j\k\2\k\r\x\k\w\3\p\z\0\t\e\u\i\q\u\y\l\k\u\p\7\o\c\6\k\o\y\d\7\a\a\b\z\0\0\7\t\h\b\o\f\2\x\t\o\c\9\l\i\x\j\8\4\g\f\c\9\7\7\5\h\n\5\y\d\b\o\4\8\u\p\m\9\y\2\p\y\h\l\a\5\m\j\c\y\8\s\g\9\h\0\8\m\s\3\1\2\a\p\y\4\p\t\8\k\1\p\w\i\l\s\k\0\d\3\x\u\g\v\6\n\p\p\r\o\8\6\0\v\9\z\9\e\m\y\m\g\j\p\x\e\4\x\4\g\d\q\i\r\x\o\z\2\m\1\q\u\8\3\j\h\z\r\m\b\f\1\p\w\u\c\u\0\5\m\4\8\v\8\y\2\k\g\r\p\a\u\r\h\v\d\7\f\x\g\y\1\7\v\h\9\n\n\7\2\i\x\f\q\y\r\h\x\i\6\x\r\n\k\p\n\l\g\g\g\n\g\v\n\x\4\1\w\v\2\g\1\a\0\5\6\i\5\w\l\u\z\m\k\5\7\s\z\r\w\t\a\w\2\o\o\k\f\z\m\j\p\4\b\3\u\m\x\2\z\4\j\o\r\o\o\6\9\2\w\9\q\b\8 ]] 00:11:37.752 00:11:37.752 real 0m4.265s 00:11:37.752 user 0m2.160s 00:11:37.752 sys 0m1.121s 00:11:37.752 16:04:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:37.752 16:04:07 -- common/autotest_common.sh@10 -- # set +x 00:11:37.752 ************************************ 00:11:37.752 END TEST dd_flags_misc_forced_aio 00:11:37.752 ************************************ 00:11:37.752 16:04:07 -- dd/posix.sh@1 -- # cleanup 00:11:37.752 16:04:07 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:37.752 16:04:07 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:37.752 00:11:37.752 real 0m20.047s 00:11:37.752 user 0m8.977s 00:11:37.752 sys 0m6.572s 00:11:37.752 16:04:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:37.752 16:04:07 -- common/autotest_common.sh@10 -- # set +x 00:11:37.752 ************************************ 00:11:37.752 END TEST spdk_dd_posix 00:11:37.752 ************************************ 00:11:37.752 16:04:07 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:11:37.752 16:04:07 -- common/autotest_common.sh@1087 -- # '[' 2 
-le 1 ']' 00:11:37.752 16:04:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:37.752 16:04:07 -- common/autotest_common.sh@10 -- # set +x 00:11:37.752 ************************************ 00:11:37.752 START TEST spdk_dd_malloc 00:11:37.752 ************************************ 00:11:37.752 16:04:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:11:38.041 * Looking for test storage... 00:11:38.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:38.041 16:04:07 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:38.041 16:04:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.041 16:04:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.041 16:04:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.041 16:04:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.041 16:04:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.041 16:04:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.041 16:04:07 -- paths/export.sh@5 -- # export PATH 00:11:38.041 16:04:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.041 16:04:07 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:11:38.041 16:04:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:38.041 16:04:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:38.041 16:04:07 -- common/autotest_common.sh@10 -- # set +x 
00:11:38.041 ************************************ 00:11:38.041 START TEST dd_malloc_copy 00:11:38.041 ************************************ 00:11:38.041 16:04:07 -- common/autotest_common.sh@1111 -- # malloc_copy 00:11:38.041 16:04:07 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:11:38.041 16:04:07 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:11:38.041 16:04:07 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:38.041 16:04:07 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:11:38.041 16:04:07 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:11:38.041 16:04:07 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:11:38.041 16:04:07 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:11:38.041 16:04:07 -- dd/malloc.sh@28 -- # gen_conf 00:11:38.041 16:04:07 -- dd/common.sh@31 -- # xtrace_disable 00:11:38.041 16:04:07 -- common/autotest_common.sh@10 -- # set +x 00:11:38.041 { 00:11:38.041 "subsystems": [ 00:11:38.041 { 00:11:38.041 "subsystem": "bdev", 00:11:38.041 "config": [ 00:11:38.041 { 00:11:38.041 "params": { 00:11:38.041 "block_size": 512, 00:11:38.041 "num_blocks": 1048576, 00:11:38.041 "name": "malloc0" 00:11:38.041 }, 00:11:38.041 "method": "bdev_malloc_create" 00:11:38.041 }, 00:11:38.041 { 00:11:38.041 "params": { 00:11:38.041 "block_size": 512, 00:11:38.042 "num_blocks": 1048576, 00:11:38.042 "name": "malloc1" 00:11:38.042 }, 00:11:38.042 "method": "bdev_malloc_create" 00:11:38.042 }, 00:11:38.042 { 00:11:38.042 "method": "bdev_wait_for_examine" 00:11:38.042 } 00:11:38.042 ] 00:11:38.042 } 00:11:38.042 ] 00:11:38.042 } 00:11:38.042 [2024-04-15 16:04:07.936616] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:38.042 [2024-04-15 16:04:07.936714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76025 ] 00:11:38.299 [2024-04-15 16:04:08.083057] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.299 [2024-04-15 16:04:08.137569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.299 [2024-04-15 16:04:08.138591] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:41.469  Copying: 224/512 [MB] (224 MBps) Copying: 462/512 [MB] (237 MBps) Copying: 512/512 [MB] (average 231 MBps) 00:11:41.469 00:11:41.469 16:04:11 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:11:41.469 16:04:11 -- dd/malloc.sh@33 -- # gen_conf 00:11:41.469 16:04:11 -- dd/common.sh@31 -- # xtrace_disable 00:11:41.469 16:04:11 -- common/autotest_common.sh@10 -- # set +x 00:11:41.469 { 00:11:41.469 "subsystems": [ 00:11:41.469 { 00:11:41.469 "subsystem": "bdev", 00:11:41.469 "config": [ 00:11:41.469 { 00:11:41.469 "params": { 00:11:41.469 "block_size": 512, 00:11:41.469 "num_blocks": 1048576, 00:11:41.469 "name": "malloc0" 00:11:41.469 }, 00:11:41.469 "method": "bdev_malloc_create" 00:11:41.469 }, 00:11:41.469 { 00:11:41.469 "params": { 00:11:41.469 "block_size": 512, 00:11:41.469 "num_blocks": 1048576, 00:11:41.469 "name": "malloc1" 00:11:41.469 }, 00:11:41.469 "method": "bdev_malloc_create" 00:11:41.469 }, 00:11:41.469 { 00:11:41.469 "method": "bdev_wait_for_examine" 00:11:41.469 } 00:11:41.469 ] 00:11:41.469 } 00:11:41.469 ] 00:11:41.469 } 00:11:41.469 [2024-04-15 16:04:11.221321] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:41.469 [2024-04-15 16:04:11.221431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76067 ] 00:11:41.469 [2024-04-15 16:04:11.361528] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.469 [2024-04-15 16:04:11.409393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.469 [2024-04-15 16:04:11.410240] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:44.358  Copying: 239/512 [MB] (239 MBps) Copying: 480/512 [MB] (241 MBps) Copying: 512/512 [MB] (average 240 MBps) 00:11:44.358 00:11:44.616 00:11:44.616 real 0m6.441s 00:11:44.616 user 0m5.543s 00:11:44.616 sys 0m0.711s 00:11:44.616 16:04:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:44.616 16:04:14 -- common/autotest_common.sh@10 -- # set +x 00:11:44.616 ************************************ 00:11:44.616 END TEST dd_malloc_copy 00:11:44.616 ************************************ 00:11:44.616 ************************************ 00:11:44.616 END TEST spdk_dd_malloc 00:11:44.616 ************************************ 00:11:44.616 00:11:44.616 real 0m6.671s 00:11:44.616 user 0m5.624s 00:11:44.616 sys 0m0.853s 00:11:44.616 16:04:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:44.616 16:04:14 -- common/autotest_common.sh@10 -- # set +x 00:11:44.616 16:04:14 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:44.616 16:04:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:44.616 16:04:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:44.616 16:04:14 -- common/autotest_common.sh@10 -- # set +x 00:11:44.616 ************************************ 00:11:44.616 START TEST spdk_dd_bdev_to_bdev 00:11:44.616 ************************************ 00:11:44.616 16:04:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:44.873 * Looking for test storage... 
00:11:44.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:44.873 16:04:14 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:44.873 16:04:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.873 16:04:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.873 16:04:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.873 16:04:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.874 16:04:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.874 16:04:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.874 16:04:14 -- paths/export.sh@5 -- # export PATH 00:11:44.874 16:04:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:11:44.874 16:04:14 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:44.874 16:04:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:44.874 16:04:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:44.874 16:04:14 -- common/autotest_common.sh@10 -- # set +x 00:11:44.874 ************************************ 00:11:44.874 START TEST dd_inflate_file 00:11:44.874 ************************************ 00:11:44.874 16:04:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:44.874 [2024-04-15 16:04:14.737369] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:44.874 [2024-04-15 16:04:14.737497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76180 ] 00:11:45.131 [2024-04-15 16:04:14.880291] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.131 [2024-04-15 16:04:14.942078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.131 [2024-04-15 16:04:14.942374] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:45.388  Copying: 64/64 [MB] (average 1049 MBps) 00:11:45.388 00:11:45.388 00:11:45.388 real 0m0.590s 00:11:45.388 user 0m0.347s 00:11:45.388 sys 0m0.324s 00:11:45.388 16:04:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:45.388 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:11:45.388 ************************************ 00:11:45.388 END TEST dd_inflate_file 00:11:45.388 ************************************ 00:11:45.388 16:04:15 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:11:45.388 16:04:15 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:11:45.388 16:04:15 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:45.388 16:04:15 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:11:45.388 16:04:15 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:11:45.388 16:04:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:45.388 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:11:45.388 16:04:15 -- dd/common.sh@31 -- # xtrace_disable 00:11:45.388 16:04:15 -- 
common/autotest_common.sh@10 -- # set +x 00:11:45.652 { 00:11:45.652 "subsystems": [ 00:11:45.652 { 00:11:45.652 "subsystem": "bdev", 00:11:45.652 "config": [ 00:11:45.652 { 00:11:45.652 "params": { 00:11:45.652 "trtype": "pcie", 00:11:45.652 "traddr": "0000:00:10.0", 00:11:45.652 "name": "Nvme0" 00:11:45.653 }, 00:11:45.653 "method": "bdev_nvme_attach_controller" 00:11:45.653 }, 00:11:45.653 { 00:11:45.653 "params": { 00:11:45.653 "trtype": "pcie", 00:11:45.653 "traddr": "0000:00:11.0", 00:11:45.653 "name": "Nvme1" 00:11:45.653 }, 00:11:45.653 "method": "bdev_nvme_attach_controller" 00:11:45.653 }, 00:11:45.653 { 00:11:45.653 "method": "bdev_wait_for_examine" 00:11:45.653 } 00:11:45.653 ] 00:11:45.653 } 00:11:45.653 ] 00:11:45.653 } 00:11:45.653 ************************************ 00:11:45.653 START TEST dd_copy_to_out_bdev 00:11:45.653 ************************************ 00:11:45.653 16:04:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:45.653 [2024-04-15 16:04:15.452217] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:45.653 [2024-04-15 16:04:15.452317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76221 ] 00:11:45.653 [2024-04-15 16:04:15.595154] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.912 [2024-04-15 16:04:15.647341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.912 [2024-04-15 16:04:15.648464] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:47.118  Copying: 64/64 [MB] (average 71 MBps) 00:11:47.118 00:11:47.118 00:11:47.118 real 0m1.567s 00:11:47.118 user 0m1.294s 00:11:47.118 sys 0m1.207s 00:11:47.118 16:04:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:47.118 ************************************ 00:11:47.118 END TEST dd_copy_to_out_bdev 00:11:47.118 ************************************ 00:11:47.118 16:04:16 -- common/autotest_common.sh@10 -- # set +x 00:11:47.118 16:04:17 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:11:47.118 16:04:17 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:11:47.118 16:04:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:47.118 16:04:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:47.118 16:04:17 -- common/autotest_common.sh@10 -- # set +x 00:11:47.383 ************************************ 00:11:47.383 START TEST dd_offset_magic 00:11:47.384 ************************************ 00:11:47.384 16:04:17 -- common/autotest_common.sh@1111 -- # offset_magic 00:11:47.384 16:04:17 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:11:47.384 16:04:17 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:11:47.384 16:04:17 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:11:47.384 16:04:17 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:47.384 16:04:17 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:11:47.384 16:04:17 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:47.384 16:04:17 -- dd/common.sh@31 -- # xtrace_disable 00:11:47.384 16:04:17 -- common/autotest_common.sh@10 -- # set +x 00:11:47.384 [2024-04-15 16:04:17.156670] Starting SPDK 
v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:47.384 [2024-04-15 16:04:17.157182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76268 ] 00:11:47.384 { 00:11:47.384 "subsystems": [ 00:11:47.384 { 00:11:47.384 "subsystem": "bdev", 00:11:47.384 "config": [ 00:11:47.384 { 00:11:47.384 "params": { 00:11:47.384 "trtype": "pcie", 00:11:47.384 "traddr": "0000:00:10.0", 00:11:47.384 "name": "Nvme0" 00:11:47.384 }, 00:11:47.384 "method": "bdev_nvme_attach_controller" 00:11:47.384 }, 00:11:47.384 { 00:11:47.384 "params": { 00:11:47.384 "trtype": "pcie", 00:11:47.384 "traddr": "0000:00:11.0", 00:11:47.384 "name": "Nvme1" 00:11:47.384 }, 00:11:47.384 "method": "bdev_nvme_attach_controller" 00:11:47.384 }, 00:11:47.384 { 00:11:47.384 "method": "bdev_wait_for_examine" 00:11:47.384 } 00:11:47.384 ] 00:11:47.384 } 00:11:47.384 ] 00:11:47.384 } 00:11:47.384 [2024-04-15 16:04:17.305517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.651 [2024-04-15 16:04:17.356855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.651 [2024-04-15 16:04:17.357783] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:47.921  Copying: 65/65 [MB] (average 902 MBps) 00:11:47.921 00:11:47.921 16:04:17 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:47.921 16:04:17 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:11:47.921 16:04:17 -- dd/common.sh@31 -- # xtrace_disable 00:11:47.921 16:04:17 -- common/autotest_common.sh@10 -- # set +x 00:11:48.194 [2024-04-15 16:04:17.910208] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:48.194 [2024-04-15 16:04:17.910305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76287 ] 00:11:48.194 { 00:11:48.194 "subsystems": [ 00:11:48.194 { 00:11:48.194 "subsystem": "bdev", 00:11:48.194 "config": [ 00:11:48.194 { 00:11:48.194 "params": { 00:11:48.194 "trtype": "pcie", 00:11:48.194 "traddr": "0000:00:10.0", 00:11:48.194 "name": "Nvme0" 00:11:48.194 }, 00:11:48.194 "method": "bdev_nvme_attach_controller" 00:11:48.194 }, 00:11:48.194 { 00:11:48.194 "params": { 00:11:48.194 "trtype": "pcie", 00:11:48.194 "traddr": "0000:00:11.0", 00:11:48.194 "name": "Nvme1" 00:11:48.194 }, 00:11:48.194 "method": "bdev_nvme_attach_controller" 00:11:48.194 }, 00:11:48.194 { 00:11:48.194 "method": "bdev_wait_for_examine" 00:11:48.194 } 00:11:48.194 ] 00:11:48.194 } 00:11:48.194 ] 00:11:48.194 } 00:11:48.194 [2024-04-15 16:04:18.046239] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.194 [2024-04-15 16:04:18.091057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.194 [2024-04-15 16:04:18.091819] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:48.718  Copying: 1024/1024 [kB] (average 500 MBps) 00:11:48.718 00:11:48.718 16:04:18 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:48.718 16:04:18 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:48.718 16:04:18 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:48.718 16:04:18 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:11:48.718 16:04:18 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:48.718 16:04:18 -- dd/common.sh@31 -- # xtrace_disable 00:11:48.718 16:04:18 -- common/autotest_common.sh@10 -- # set +x 00:11:48.718 [2024-04-15 16:04:18.498145] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:48.718 [2024-04-15 16:04:18.498221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76304 ] 00:11:48.718 { 00:11:48.718 "subsystems": [ 00:11:48.718 { 00:11:48.718 "subsystem": "bdev", 00:11:48.718 "config": [ 00:11:48.718 { 00:11:48.718 "params": { 00:11:48.718 "trtype": "pcie", 00:11:48.718 "traddr": "0000:00:10.0", 00:11:48.718 "name": "Nvme0" 00:11:48.718 }, 00:11:48.718 "method": "bdev_nvme_attach_controller" 00:11:48.718 }, 00:11:48.718 { 00:11:48.718 "params": { 00:11:48.718 "trtype": "pcie", 00:11:48.718 "traddr": "0000:00:11.0", 00:11:48.718 "name": "Nvme1" 00:11:48.718 }, 00:11:48.718 "method": "bdev_nvme_attach_controller" 00:11:48.718 }, 00:11:48.718 { 00:11:48.718 "method": "bdev_wait_for_examine" 00:11:48.718 } 00:11:48.718 ] 00:11:48.718 } 00:11:48.718 ] 00:11:48.718 } 00:11:48.718 [2024-04-15 16:04:18.631864] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.718 [2024-04-15 16:04:18.680176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.718 [2024-04-15 16:04:18.680865] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:49.233  Copying: 65/65 [MB] (average 942 MBps) 00:11:49.233 00:11:49.233 16:04:19 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:11:49.233 16:04:19 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:49.233 16:04:19 -- dd/common.sh@31 -- # xtrace_disable 00:11:49.233 16:04:19 -- common/autotest_common.sh@10 -- # set +x 00:11:49.491 [2024-04-15 16:04:19.217450] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:49.491 [2024-04-15 16:04:19.218149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76324 ] 00:11:49.491 { 00:11:49.491 "subsystems": [ 00:11:49.491 { 00:11:49.491 "subsystem": "bdev", 00:11:49.491 "config": [ 00:11:49.491 { 00:11:49.491 "params": { 00:11:49.491 "trtype": "pcie", 00:11:49.491 "traddr": "0000:00:10.0", 00:11:49.491 "name": "Nvme0" 00:11:49.491 }, 00:11:49.491 "method": "bdev_nvme_attach_controller" 00:11:49.491 }, 00:11:49.491 { 00:11:49.491 "params": { 00:11:49.491 "trtype": "pcie", 00:11:49.491 "traddr": "0000:00:11.0", 00:11:49.491 "name": "Nvme1" 00:11:49.491 }, 00:11:49.491 "method": "bdev_nvme_attach_controller" 00:11:49.491 }, 00:11:49.491 { 00:11:49.491 "method": "bdev_wait_for_examine" 00:11:49.491 } 00:11:49.491 ] 00:11:49.491 } 00:11:49.491 ] 00:11:49.491 } 00:11:49.491 [2024-04-15 16:04:19.362078] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.491 [2024-04-15 16:04:19.414669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.491 [2024-04-15 16:04:19.415364] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:50.023  Copying: 1024/1024 [kB] (average 500 MBps) 00:11:50.023 00:11:50.023 16:04:19 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:50.023 16:04:19 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:50.023 00:11:50.023 real 0m2.687s 00:11:50.023 user 0m1.895s 00:11:50.023 sys 0m0.814s 00:11:50.023 16:04:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:50.023 16:04:19 -- common/autotest_common.sh@10 -- # set +x 00:11:50.023 ************************************ 00:11:50.023 END TEST dd_offset_magic 00:11:50.023 ************************************ 00:11:50.023 16:04:19 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:11:50.023 16:04:19 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:11:50.023 16:04:19 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:50.023 16:04:19 -- dd/common.sh@11 -- # local nvme_ref= 00:11:50.023 16:04:19 -- dd/common.sh@12 -- # local size=4194330 00:11:50.023 16:04:19 -- dd/common.sh@14 -- # local bs=1048576 00:11:50.023 16:04:19 -- dd/common.sh@15 -- # local count=5 00:11:50.023 16:04:19 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:11:50.023 16:04:19 -- dd/common.sh@18 -- # gen_conf 00:11:50.023 16:04:19 -- dd/common.sh@31 -- # xtrace_disable 00:11:50.023 16:04:19 -- common/autotest_common.sh@10 -- # set +x 00:11:50.023 [2024-04-15 16:04:19.893468] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
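The dd_offset_magic rounds that wrap up above all follow the same shape: the 26-byte marker line 'This Is Our Magic, find it' sits at the head of dd.dump0 and hence of Nvme0n1, gets copied into Nvme1n1 at a 1 MiB-block offset with --seek, is pulled back out with --skip, and the first 26 bytes are compared against the original. A minimal stand-alone sketch of that flow, assuming a conf.json with the same two-controller bdev layout dumped above (the file names here are placeholders, and the 64 MiB of padding the real test inflates behind the marker is skipped):
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
magic='This Is Our Magic, find it'
echo "$magic" > dd.dump0                                            # the 26-byte marker at offset 0 of the source file
"$DD" --if=dd.dump0 --ob=Nvme0n1 --json conf.json                   # load the source file into the first bdev
"$DD" --ib=Nvme0n1 --ob=Nvme1n1 --bs=1048576 --count=65 --seek=16 --json conf.json   # bdev-to-bdev copy landing at block 16
"$DD" --ib=Nvme1n1 --of=dd.dump1 --bs=1048576 --count=1 --skip=16 --json conf.json   # read one block back from that offset
read -rn26 magic_check < dd.dump1
[[ "$magic_check" == "$magic" ]] && echo 'magic found at offset 16'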
00:11:50.023 [2024-04-15 16:04:19.894055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76350 ] 00:11:50.023 { 00:11:50.023 "subsystems": [ 00:11:50.023 { 00:11:50.023 "subsystem": "bdev", 00:11:50.023 "config": [ 00:11:50.023 { 00:11:50.023 "params": { 00:11:50.023 "trtype": "pcie", 00:11:50.023 "traddr": "0000:00:10.0", 00:11:50.023 "name": "Nvme0" 00:11:50.023 }, 00:11:50.023 "method": "bdev_nvme_attach_controller" 00:11:50.023 }, 00:11:50.023 { 00:11:50.023 "params": { 00:11:50.023 "trtype": "pcie", 00:11:50.023 "traddr": "0000:00:11.0", 00:11:50.023 "name": "Nvme1" 00:11:50.023 }, 00:11:50.023 "method": "bdev_nvme_attach_controller" 00:11:50.023 }, 00:11:50.023 { 00:11:50.023 "method": "bdev_wait_for_examine" 00:11:50.023 } 00:11:50.023 ] 00:11:50.023 } 00:11:50.023 ] 00:11:50.023 } 00:11:50.280 [2024-04-15 16:04:20.036595] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.280 [2024-04-15 16:04:20.083526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.280 [2024-04-15 16:04:20.084429] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:50.538  Copying: 5120/5120 [kB] (average 1250 MBps) 00:11:50.538 00:11:50.538 16:04:20 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:11:50.538 16:04:20 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:11:50.538 16:04:20 -- dd/common.sh@11 -- # local nvme_ref= 00:11:50.538 16:04:20 -- dd/common.sh@12 -- # local size=4194330 00:11:50.538 16:04:20 -- dd/common.sh@14 -- # local bs=1048576 00:11:50.538 16:04:20 -- dd/common.sh@15 -- # local count=5 00:11:50.538 16:04:20 -- dd/common.sh@18 -- # gen_conf 00:11:50.538 16:04:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:11:50.538 16:04:20 -- dd/common.sh@31 -- # xtrace_disable 00:11:50.538 16:04:20 -- common/autotest_common.sh@10 -- # set +x 00:11:50.538 [2024-04-15 16:04:20.498831] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
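The clear_nvme calls above only scrub as much of each namespace as the test dirtied: a requested size of 4194330 bytes (which looks like the 4 MiB payload plus the 26-byte magic line) at a 1 MiB block size rounds up to the count=5 handed to spdk_dd. The same arithmetic in shell form:
size=4194330; bs=1048576
count=$(( (size + bs - 1) / bs ))    # ceil(4194330 / 1048576)
echo "$count"                        # prints 5
# spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 then overwrites that range with zeros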
00:11:50.538 [2024-04-15 16:04:20.498919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76371 ] 00:11:50.796 { 00:11:50.796 "subsystems": [ 00:11:50.796 { 00:11:50.796 "subsystem": "bdev", 00:11:50.796 "config": [ 00:11:50.796 { 00:11:50.796 "params": { 00:11:50.796 "trtype": "pcie", 00:11:50.796 "traddr": "0000:00:10.0", 00:11:50.796 "name": "Nvme0" 00:11:50.796 }, 00:11:50.796 "method": "bdev_nvme_attach_controller" 00:11:50.796 }, 00:11:50.796 { 00:11:50.796 "params": { 00:11:50.796 "trtype": "pcie", 00:11:50.796 "traddr": "0000:00:11.0", 00:11:50.796 "name": "Nvme1" 00:11:50.796 }, 00:11:50.796 "method": "bdev_nvme_attach_controller" 00:11:50.796 }, 00:11:50.796 { 00:11:50.796 "method": "bdev_wait_for_examine" 00:11:50.796 } 00:11:50.796 ] 00:11:50.796 } 00:11:50.796 ] 00:11:50.796 } 00:11:50.796 [2024-04-15 16:04:20.634208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.796 [2024-04-15 16:04:20.679654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.796 [2024-04-15 16:04:20.680530] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:51.311  Copying: 5120/5120 [kB] (average 833 MBps) 00:11:51.311 00:11:51.311 16:04:21 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:11:51.311 00:11:51.311 real 0m6.574s 00:11:51.311 user 0m4.586s 00:11:51.311 sys 0m3.110s 00:11:51.311 16:04:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:51.311 16:04:21 -- common/autotest_common.sh@10 -- # set +x 00:11:51.311 ************************************ 00:11:51.311 END TEST spdk_dd_bdev_to_bdev 00:11:51.311 ************************************ 00:11:51.311 16:04:21 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:11:51.311 16:04:21 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:51.311 16:04:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:51.311 16:04:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:51.311 16:04:21 -- common/autotest_common.sh@10 -- # set +x 00:11:51.311 ************************************ 00:11:51.311 START TEST spdk_dd_uring 00:11:51.311 ************************************ 00:11:51.311 16:04:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:51.569 * Looking for test storage... 
00:11:51.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:51.569 16:04:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:51.569 16:04:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.569 16:04:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.569 16:04:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.569 16:04:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.569 16:04:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.569 16:04:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.569 16:04:21 -- paths/export.sh@5 -- # export PATH 00:11:51.569 16:04:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.569 16:04:21 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:11:51.569 16:04:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:51.569 16:04:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:51.569 16:04:21 -- common/autotest_common.sh@10 -- # set +x 00:11:51.569 ************************************ 00:11:51.569 START TEST dd_uring_copy 00:11:51.569 ************************************ 00:11:51.569 16:04:21 -- common/autotest_common.sh@1111 -- # uring_zram_copy 00:11:51.569 16:04:21 -- dd/uring.sh@15 -- # local zram_dev_id 00:11:51.569 16:04:21 -- dd/uring.sh@16 -- # local magic 00:11:51.569 16:04:21 -- dd/uring.sh@17 -- # local 
magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:11:51.569 16:04:21 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:51.569 16:04:21 -- dd/uring.sh@19 -- # local verify_magic 00:11:51.569 16:04:21 -- dd/uring.sh@21 -- # init_zram 00:11:51.569 16:04:21 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:11:51.569 16:04:21 -- dd/common.sh@164 -- # return 00:11:51.569 16:04:21 -- dd/uring.sh@22 -- # create_zram_dev 00:11:51.569 16:04:21 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:11:51.569 16:04:21 -- dd/uring.sh@22 -- # zram_dev_id=1 00:11:51.569 16:04:21 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:11:51.569 16:04:21 -- dd/common.sh@181 -- # local id=1 00:11:51.569 16:04:21 -- dd/common.sh@182 -- # local size=512M 00:11:51.569 16:04:21 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:11:51.569 16:04:21 -- dd/common.sh@186 -- # echo 512M 00:11:51.569 16:04:21 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:11:51.569 16:04:21 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:11:51.569 16:04:21 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:11:51.569 16:04:21 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:11:51.569 16:04:21 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:51.569 16:04:21 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:11:51.569 16:04:21 -- dd/uring.sh@41 -- # gen_bytes 1024 00:11:51.569 16:04:21 -- dd/common.sh@98 -- # xtrace_disable 00:11:51.569 16:04:21 -- common/autotest_common.sh@10 -- # set +x 00:11:51.569 16:04:21 -- dd/uring.sh@41 -- # magic=06ceu9c3litfklouvumnuelzdubhz3kavdu06fzl5ouec6c48of7y8pdc4p9gbs93apsq6y7xoz2izdpxpaa92gtay0khwl0wpgio2d6loeflvqszvbebxt1ovkkgqqfbtygvz3zp5abo6uyns4eljrpo7pphqnqc3jaod7p6jodiygxg3xsqjzgm7f5giwuih7gstu430rch7yb3ez75o8j7lk93g1ss04n96bt9n25txuz9g4jroomz9m5yw0we1et1wa33v19bviah48qrdq27sxz8mh6k87ed8d91oe5ocw7etc7h5o2u5k76ghq5709i4yamyl6t5ad545w1f9n5v4rougk69fkefbj747ga34cxbt13t6wm3mgxl2yabbdfbc8qoc4ak082kwqzlcp68d6fdlwy5jbxrzgw201bks2r8y7tpgb1k0wbqkdsue7d83fisx574vgl3bkqa159edyogcxj6mz4iznohcah179jp0nhn0oibt4q0ng38634b0luf9b5ts7zd8ikykkjttbocdlfgn5qtm0lv98tiaebckyb0tkuj0wz4f4mv866w4il1p1m5i0d84s21yhmvp17ztki0v4vpotus1muadiu5fwj1dcvfabygx744lnv3snvohezmmkfu1o05jkti12l9wbo6hoikz4ezevs14rp6z5n20jltqavs8hi8yaiclsecy4bm53gzz0uuhgc45720x0ojxh80hbcyncbuhxbob3ijvz3w6juoxrcf0hhnx2mltlfw1as3m5qg1iz1yz053h0nvdxi6zive7r8zl9qhptmvi54fcx6y12iwdtr8gbb3aicbj57pnlzesezp5mxhpjvdd8wlnspht2d9u2roya3f4qlr6ijbn8ub3q64iujaubh17sjcvn9zkozens76us0e99t0790247s2v9kx33trrwyfrh21ma0rqoqq6x9t2z658wirb0i995agiw4la9w7o7d3muj8at4161dy8wxzfs7xs0na3 00:11:51.569 16:04:21 -- dd/uring.sh@42 -- # echo 
06ceu9c3litfklouvumnuelzdubhz3kavdu06fzl5ouec6c48of7y8pdc4p9gbs93apsq6y7xoz2izdpxpaa92gtay0khwl0wpgio2d6loeflvqszvbebxt1ovkkgqqfbtygvz3zp5abo6uyns4eljrpo7pphqnqc3jaod7p6jodiygxg3xsqjzgm7f5giwuih7gstu430rch7yb3ez75o8j7lk93g1ss04n96bt9n25txuz9g4jroomz9m5yw0we1et1wa33v19bviah48qrdq27sxz8mh6k87ed8d91oe5ocw7etc7h5o2u5k76ghq5709i4yamyl6t5ad545w1f9n5v4rougk69fkefbj747ga34cxbt13t6wm3mgxl2yabbdfbc8qoc4ak082kwqzlcp68d6fdlwy5jbxrzgw201bks2r8y7tpgb1k0wbqkdsue7d83fisx574vgl3bkqa159edyogcxj6mz4iznohcah179jp0nhn0oibt4q0ng38634b0luf9b5ts7zd8ikykkjttbocdlfgn5qtm0lv98tiaebckyb0tkuj0wz4f4mv866w4il1p1m5i0d84s21yhmvp17ztki0v4vpotus1muadiu5fwj1dcvfabygx744lnv3snvohezmmkfu1o05jkti12l9wbo6hoikz4ezevs14rp6z5n20jltqavs8hi8yaiclsecy4bm53gzz0uuhgc45720x0ojxh80hbcyncbuhxbob3ijvz3w6juoxrcf0hhnx2mltlfw1as3m5qg1iz1yz053h0nvdxi6zive7r8zl9qhptmvi54fcx6y12iwdtr8gbb3aicbj57pnlzesezp5mxhpjvdd8wlnspht2d9u2roya3f4qlr6ijbn8ub3q64iujaubh17sjcvn9zkozens76us0e99t0790247s2v9kx33trrwyfrh21ma0rqoqq6x9t2z658wirb0i995agiw4la9w7o7d3muj8at4161dy8wxzfs7xs0na3 00:11:51.569 16:04:21 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:11:51.569 [2024-04-15 16:04:21.471364] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:11:51.569 [2024-04-15 16:04:21.471467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76450 ] 00:11:51.827 [2024-04-15 16:04:21.621524] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.827 [2024-04-15 16:04:21.676818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.827 [2024-04-15 16:04:21.676902] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:53.049  Copying: 511/511 [MB] (average 1017 MBps) 00:11:53.049 00:11:53.049 16:04:22 -- dd/uring.sh@54 -- # gen_conf 00:11:53.049 16:04:22 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:11:53.049 16:04:22 -- dd/common.sh@31 -- # xtrace_disable 00:11:53.049 16:04:22 -- common/autotest_common.sh@10 -- # set +x 00:11:53.049 [2024-04-15 16:04:22.817642] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
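The dump file assembled above is sized to fill the 512M zram device exactly: assuming gen_bytes 1024 produced a 1024-character magic line and echo added its usual trailing newline, the 536869887 zero bytes appended by spdk_dd with --oflag=append bring the total to 536870912 bytes, i.e. 512 MiB on the nose. A quick check of that arithmetic against the file named in the log:
echo $(( 1024 + 1 + 536869887 ))    # 536870912
echo $(( 512 * 1024 * 1024 ))       # 536870912 as well
stat --printf='%s\n' /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0    # expected: 536870912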
00:11:53.049 [2024-04-15 16:04:22.817745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76466 ] 00:11:53.049 { 00:11:53.049 "subsystems": [ 00:11:53.049 { 00:11:53.049 "subsystem": "bdev", 00:11:53.049 "config": [ 00:11:53.049 { 00:11:53.049 "params": { 00:11:53.049 "block_size": 512, 00:11:53.049 "num_blocks": 1048576, 00:11:53.049 "name": "malloc0" 00:11:53.049 }, 00:11:53.049 "method": "bdev_malloc_create" 00:11:53.049 }, 00:11:53.049 { 00:11:53.049 "params": { 00:11:53.049 "filename": "/dev/zram1", 00:11:53.049 "name": "uring0" 00:11:53.049 }, 00:11:53.049 "method": "bdev_uring_create" 00:11:53.049 }, 00:11:53.049 { 00:11:53.049 "method": "bdev_wait_for_examine" 00:11:53.049 } 00:11:53.049 ] 00:11:53.049 } 00:11:53.049 ] 00:11:53.049 } 00:11:53.049 [2024-04-15 16:04:22.962054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.049 [2024-04-15 16:04:23.008223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.049 [2024-04-15 16:04:23.008976] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:55.256  Copying: 314/512 [MB] (314 MBps) Copying: 512/512 [MB] (average 311 MBps) 00:11:55.256 00:11:55.256 16:04:25 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:11:55.256 16:04:25 -- dd/uring.sh@60 -- # gen_conf 00:11:55.256 16:04:25 -- dd/common.sh@31 -- # xtrace_disable 00:11:55.256 16:04:25 -- common/autotest_common.sh@10 -- # set +x 00:11:55.515 { 00:11:55.515 "subsystems": [ 00:11:55.515 { 00:11:55.515 "subsystem": "bdev", 00:11:55.515 "config": [ 00:11:55.515 { 00:11:55.515 "params": { 00:11:55.515 "block_size": 512, 00:11:55.515 "num_blocks": 1048576, 00:11:55.515 "name": "malloc0" 00:11:55.515 }, 00:11:55.515 "method": "bdev_malloc_create" 00:11:55.515 }, 00:11:55.515 { 00:11:55.515 "params": { 00:11:55.515 "filename": "/dev/zram1", 00:11:55.515 "name": "uring0" 00:11:55.515 }, 00:11:55.515 "method": "bdev_uring_create" 00:11:55.515 }, 00:11:55.515 { 00:11:55.515 "method": "bdev_wait_for_examine" 00:11:55.515 } 00:11:55.515 ] 00:11:55.515 } 00:11:55.515 ] 00:11:55.515 } 00:11:55.515 [2024-04-15 16:04:25.259183] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:11:55.515 [2024-04-15 16:04:25.259281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76505 ] 00:11:55.515 [2024-04-15 16:04:25.405353] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.515 [2024-04-15 16:04:25.454619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.515 [2024-04-15 16:04:25.455285] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:11:58.397  Copying: 233/512 [MB] (233 MBps) Copying: 481/512 [MB] (247 MBps) Copying: 512/512 [MB] (average 241 MBps) 00:11:58.397 00:11:58.397 16:04:28 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:11:58.398 16:04:28 -- dd/uring.sh@66 -- # [[ 06ceu9c3litfklouvumnuelzdubhz3kavdu06fzl5ouec6c48of7y8pdc4p9gbs93apsq6y7xoz2izdpxpaa92gtay0khwl0wpgio2d6loeflvqszvbebxt1ovkkgqqfbtygvz3zp5abo6uyns4eljrpo7pphqnqc3jaod7p6jodiygxg3xsqjzgm7f5giwuih7gstu430rch7yb3ez75o8j7lk93g1ss04n96bt9n25txuz9g4jroomz9m5yw0we1et1wa33v19bviah48qrdq27sxz8mh6k87ed8d91oe5ocw7etc7h5o2u5k76ghq5709i4yamyl6t5ad545w1f9n5v4rougk69fkefbj747ga34cxbt13t6wm3mgxl2yabbdfbc8qoc4ak082kwqzlcp68d6fdlwy5jbxrzgw201bks2r8y7tpgb1k0wbqkdsue7d83fisx574vgl3bkqa159edyogcxj6mz4iznohcah179jp0nhn0oibt4q0ng38634b0luf9b5ts7zd8ikykkjttbocdlfgn5qtm0lv98tiaebckyb0tkuj0wz4f4mv866w4il1p1m5i0d84s21yhmvp17ztki0v4vpotus1muadiu5fwj1dcvfabygx744lnv3snvohezmmkfu1o05jkti12l9wbo6hoikz4ezevs14rp6z5n20jltqavs8hi8yaiclsecy4bm53gzz0uuhgc45720x0ojxh80hbcyncbuhxbob3ijvz3w6juoxrcf0hhnx2mltlfw1as3m5qg1iz1yz053h0nvdxi6zive7r8zl9qhptmvi54fcx6y12iwdtr8gbb3aicbj57pnlzesezp5mxhpjvdd8wlnspht2d9u2roya3f4qlr6ijbn8ub3q64iujaubh17sjcvn9zkozens76us0e99t0790247s2v9kx33trrwyfrh21ma0rqoqq6x9t2z658wirb0i995agiw4la9w7o7d3muj8at4161dy8wxzfs7xs0na3 == 
\0\6\c\e\u\9\c\3\l\i\t\f\k\l\o\u\v\u\m\n\u\e\l\z\d\u\b\h\z\3\k\a\v\d\u\0\6\f\z\l\5\o\u\e\c\6\c\4\8\o\f\7\y\8\p\d\c\4\p\9\g\b\s\9\3\a\p\s\q\6\y\7\x\o\z\2\i\z\d\p\x\p\a\a\9\2\g\t\a\y\0\k\h\w\l\0\w\p\g\i\o\2\d\6\l\o\e\f\l\v\q\s\z\v\b\e\b\x\t\1\o\v\k\k\g\q\q\f\b\t\y\g\v\z\3\z\p\5\a\b\o\6\u\y\n\s\4\e\l\j\r\p\o\7\p\p\h\q\n\q\c\3\j\a\o\d\7\p\6\j\o\d\i\y\g\x\g\3\x\s\q\j\z\g\m\7\f\5\g\i\w\u\i\h\7\g\s\t\u\4\3\0\r\c\h\7\y\b\3\e\z\7\5\o\8\j\7\l\k\9\3\g\1\s\s\0\4\n\9\6\b\t\9\n\2\5\t\x\u\z\9\g\4\j\r\o\o\m\z\9\m\5\y\w\0\w\e\1\e\t\1\w\a\3\3\v\1\9\b\v\i\a\h\4\8\q\r\d\q\2\7\s\x\z\8\m\h\6\k\8\7\e\d\8\d\9\1\o\e\5\o\c\w\7\e\t\c\7\h\5\o\2\u\5\k\7\6\g\h\q\5\7\0\9\i\4\y\a\m\y\l\6\t\5\a\d\5\4\5\w\1\f\9\n\5\v\4\r\o\u\g\k\6\9\f\k\e\f\b\j\7\4\7\g\a\3\4\c\x\b\t\1\3\t\6\w\m\3\m\g\x\l\2\y\a\b\b\d\f\b\c\8\q\o\c\4\a\k\0\8\2\k\w\q\z\l\c\p\6\8\d\6\f\d\l\w\y\5\j\b\x\r\z\g\w\2\0\1\b\k\s\2\r\8\y\7\t\p\g\b\1\k\0\w\b\q\k\d\s\u\e\7\d\8\3\f\i\s\x\5\7\4\v\g\l\3\b\k\q\a\1\5\9\e\d\y\o\g\c\x\j\6\m\z\4\i\z\n\o\h\c\a\h\1\7\9\j\p\0\n\h\n\0\o\i\b\t\4\q\0\n\g\3\8\6\3\4\b\0\l\u\f\9\b\5\t\s\7\z\d\8\i\k\y\k\k\j\t\t\b\o\c\d\l\f\g\n\5\q\t\m\0\l\v\9\8\t\i\a\e\b\c\k\y\b\0\t\k\u\j\0\w\z\4\f\4\m\v\8\6\6\w\4\i\l\1\p\1\m\5\i\0\d\8\4\s\2\1\y\h\m\v\p\1\7\z\t\k\i\0\v\4\v\p\o\t\u\s\1\m\u\a\d\i\u\5\f\w\j\1\d\c\v\f\a\b\y\g\x\7\4\4\l\n\v\3\s\n\v\o\h\e\z\m\m\k\f\u\1\o\0\5\j\k\t\i\1\2\l\9\w\b\o\6\h\o\i\k\z\4\e\z\e\v\s\1\4\r\p\6\z\5\n\2\0\j\l\t\q\a\v\s\8\h\i\8\y\a\i\c\l\s\e\c\y\4\b\m\5\3\g\z\z\0\u\u\h\g\c\4\5\7\2\0\x\0\o\j\x\h\8\0\h\b\c\y\n\c\b\u\h\x\b\o\b\3\i\j\v\z\3\w\6\j\u\o\x\r\c\f\0\h\h\n\x\2\m\l\t\l\f\w\1\a\s\3\m\5\q\g\1\i\z\1\y\z\0\5\3\h\0\n\v\d\x\i\6\z\i\v\e\7\r\8\z\l\9\q\h\p\t\m\v\i\5\4\f\c\x\6\y\1\2\i\w\d\t\r\8\g\b\b\3\a\i\c\b\j\5\7\p\n\l\z\e\s\e\z\p\5\m\x\h\p\j\v\d\d\8\w\l\n\s\p\h\t\2\d\9\u\2\r\o\y\a\3\f\4\q\l\r\6\i\j\b\n\8\u\b\3\q\6\4\i\u\j\a\u\b\h\1\7\s\j\c\v\n\9\z\k\o\z\e\n\s\7\6\u\s\0\e\9\9\t\0\7\9\0\2\4\7\s\2\v\9\k\x\3\3\t\r\r\w\y\f\r\h\2\1\m\a\0\r\q\o\q\q\6\x\9\t\2\z\6\5\8\w\i\r\b\0\i\9\9\5\a\g\i\w\4\l\a\9\w\7\o\7\d\3\m\u\j\8\a\t\4\1\6\1\d\y\8\w\x\z\f\s\7\x\s\0\n\a\3 ]] 00:11:58.398 16:04:28 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:11:58.398 16:04:28 -- dd/uring.sh@69 -- # [[ 06ceu9c3litfklouvumnuelzdubhz3kavdu06fzl5ouec6c48of7y8pdc4p9gbs93apsq6y7xoz2izdpxpaa92gtay0khwl0wpgio2d6loeflvqszvbebxt1ovkkgqqfbtygvz3zp5abo6uyns4eljrpo7pphqnqc3jaod7p6jodiygxg3xsqjzgm7f5giwuih7gstu430rch7yb3ez75o8j7lk93g1ss04n96bt9n25txuz9g4jroomz9m5yw0we1et1wa33v19bviah48qrdq27sxz8mh6k87ed8d91oe5ocw7etc7h5o2u5k76ghq5709i4yamyl6t5ad545w1f9n5v4rougk69fkefbj747ga34cxbt13t6wm3mgxl2yabbdfbc8qoc4ak082kwqzlcp68d6fdlwy5jbxrzgw201bks2r8y7tpgb1k0wbqkdsue7d83fisx574vgl3bkqa159edyogcxj6mz4iznohcah179jp0nhn0oibt4q0ng38634b0luf9b5ts7zd8ikykkjttbocdlfgn5qtm0lv98tiaebckyb0tkuj0wz4f4mv866w4il1p1m5i0d84s21yhmvp17ztki0v4vpotus1muadiu5fwj1dcvfabygx744lnv3snvohezmmkfu1o05jkti12l9wbo6hoikz4ezevs14rp6z5n20jltqavs8hi8yaiclsecy4bm53gzz0uuhgc45720x0ojxh80hbcyncbuhxbob3ijvz3w6juoxrcf0hhnx2mltlfw1as3m5qg1iz1yz053h0nvdxi6zive7r8zl9qhptmvi54fcx6y12iwdtr8gbb3aicbj57pnlzesezp5mxhpjvdd8wlnspht2d9u2roya3f4qlr6ijbn8ub3q64iujaubh17sjcvn9zkozens76us0e99t0790247s2v9kx33trrwyfrh21ma0rqoqq6x9t2z658wirb0i995agiw4la9w7o7d3muj8at4161dy8wxzfs7xs0na3 == 
\0\6\c\e\u\9\c\3\l\i\t\f\k\l\o\u\v\u\m\n\u\e\l\z\d\u\b\h\z\3\k\a\v\d\u\0\6\f\z\l\5\o\u\e\c\6\c\4\8\o\f\7\y\8\p\d\c\4\p\9\g\b\s\9\3\a\p\s\q\6\y\7\x\o\z\2\i\z\d\p\x\p\a\a\9\2\g\t\a\y\0\k\h\w\l\0\w\p\g\i\o\2\d\6\l\o\e\f\l\v\q\s\z\v\b\e\b\x\t\1\o\v\k\k\g\q\q\f\b\t\y\g\v\z\3\z\p\5\a\b\o\6\u\y\n\s\4\e\l\j\r\p\o\7\p\p\h\q\n\q\c\3\j\a\o\d\7\p\6\j\o\d\i\y\g\x\g\3\x\s\q\j\z\g\m\7\f\5\g\i\w\u\i\h\7\g\s\t\u\4\3\0\r\c\h\7\y\b\3\e\z\7\5\o\8\j\7\l\k\9\3\g\1\s\s\0\4\n\9\6\b\t\9\n\2\5\t\x\u\z\9\g\4\j\r\o\o\m\z\9\m\5\y\w\0\w\e\1\e\t\1\w\a\3\3\v\1\9\b\v\i\a\h\4\8\q\r\d\q\2\7\s\x\z\8\m\h\6\k\8\7\e\d\8\d\9\1\o\e\5\o\c\w\7\e\t\c\7\h\5\o\2\u\5\k\7\6\g\h\q\5\7\0\9\i\4\y\a\m\y\l\6\t\5\a\d\5\4\5\w\1\f\9\n\5\v\4\r\o\u\g\k\6\9\f\k\e\f\b\j\7\4\7\g\a\3\4\c\x\b\t\1\3\t\6\w\m\3\m\g\x\l\2\y\a\b\b\d\f\b\c\8\q\o\c\4\a\k\0\8\2\k\w\q\z\l\c\p\6\8\d\6\f\d\l\w\y\5\j\b\x\r\z\g\w\2\0\1\b\k\s\2\r\8\y\7\t\p\g\b\1\k\0\w\b\q\k\d\s\u\e\7\d\8\3\f\i\s\x\5\7\4\v\g\l\3\b\k\q\a\1\5\9\e\d\y\o\g\c\x\j\6\m\z\4\i\z\n\o\h\c\a\h\1\7\9\j\p\0\n\h\n\0\o\i\b\t\4\q\0\n\g\3\8\6\3\4\b\0\l\u\f\9\b\5\t\s\7\z\d\8\i\k\y\k\k\j\t\t\b\o\c\d\l\f\g\n\5\q\t\m\0\l\v\9\8\t\i\a\e\b\c\k\y\b\0\t\k\u\j\0\w\z\4\f\4\m\v\8\6\6\w\4\i\l\1\p\1\m\5\i\0\d\8\4\s\2\1\y\h\m\v\p\1\7\z\t\k\i\0\v\4\v\p\o\t\u\s\1\m\u\a\d\i\u\5\f\w\j\1\d\c\v\f\a\b\y\g\x\7\4\4\l\n\v\3\s\n\v\o\h\e\z\m\m\k\f\u\1\o\0\5\j\k\t\i\1\2\l\9\w\b\o\6\h\o\i\k\z\4\e\z\e\v\s\1\4\r\p\6\z\5\n\2\0\j\l\t\q\a\v\s\8\h\i\8\y\a\i\c\l\s\e\c\y\4\b\m\5\3\g\z\z\0\u\u\h\g\c\4\5\7\2\0\x\0\o\j\x\h\8\0\h\b\c\y\n\c\b\u\h\x\b\o\b\3\i\j\v\z\3\w\6\j\u\o\x\r\c\f\0\h\h\n\x\2\m\l\t\l\f\w\1\a\s\3\m\5\q\g\1\i\z\1\y\z\0\5\3\h\0\n\v\d\x\i\6\z\i\v\e\7\r\8\z\l\9\q\h\p\t\m\v\i\5\4\f\c\x\6\y\1\2\i\w\d\t\r\8\g\b\b\3\a\i\c\b\j\5\7\p\n\l\z\e\s\e\z\p\5\m\x\h\p\j\v\d\d\8\w\l\n\s\p\h\t\2\d\9\u\2\r\o\y\a\3\f\4\q\l\r\6\i\j\b\n\8\u\b\3\q\6\4\i\u\j\a\u\b\h\1\7\s\j\c\v\n\9\z\k\o\z\e\n\s\7\6\u\s\0\e\9\9\t\0\7\9\0\2\4\7\s\2\v\9\k\x\3\3\t\r\r\w\y\f\r\h\2\1\m\a\0\r\q\o\q\q\6\x\9\t\2\z\6\5\8\w\i\r\b\0\i\9\9\5\a\g\i\w\4\l\a\9\w\7\o\7\d\3\m\u\j\8\a\t\4\1\6\1\d\y\8\w\x\z\f\s\7\x\s\0\n\a\3 ]] 00:11:58.398 16:04:28 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:58.662 16:04:28 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:11:58.662 16:04:28 -- dd/uring.sh@75 -- # gen_conf 00:11:58.662 16:04:28 -- dd/common.sh@31 -- # xtrace_disable 00:11:58.662 16:04:28 -- common/autotest_common.sh@10 -- # set +x 00:11:58.662 [2024-04-15 16:04:28.620379] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
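The round trip that just finished (magic.dump0 into the uring0 bdev, back out into magic.dump1, then diff -q) needs nothing beyond a zram device and the two-bdev config shown in the JSON dumps above. A stand-alone sketch of that setup; the sysfs redirect target for the device size is an assumption, since the log only shows the value being echoed, and conf.json is a placeholder path:
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
id=$(cat /sys/class/zram-control/hot_add)        # allocates a fresh zram device and prints its id
echo 512M > "/sys/block/zram$id/disksize"        # assumed target of the 'echo 512M' seen in set_zram_dev
cat > conf.json <<EOF
{"subsystems":[{"subsystem":"bdev","config":[
 {"params":{"block_size":512,"num_blocks":1048576,"name":"malloc0"},"method":"bdev_malloc_create"},
 {"params":{"filename":"/dev/zram$id","name":"uring0"},"method":"bdev_uring_create"},
 {"method":"bdev_wait_for_examine"}]}]}
EOF
"$DD" --if=magic.dump0 --ob=uring0 --json conf.json    # file into the zram-backed uring bdev
"$DD" --ib=uring0 --of=magic.dump1 --json conf.json    # and back out again
diff -q magic.dump0 magic.dump1 && echo 'round trip intact'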
00:11:58.662 [2024-04-15 16:04:28.620460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76562 ] 00:11:58.921 { 00:11:58.921 "subsystems": [ 00:11:58.921 { 00:11:58.921 "subsystem": "bdev", 00:11:58.921 "config": [ 00:11:58.921 { 00:11:58.921 "params": { 00:11:58.921 "block_size": 512, 00:11:58.921 "num_blocks": 1048576, 00:11:58.921 "name": "malloc0" 00:11:58.921 }, 00:11:58.921 "method": "bdev_malloc_create" 00:11:58.921 }, 00:11:58.921 { 00:11:58.921 "params": { 00:11:58.921 "filename": "/dev/zram1", 00:11:58.921 "name": "uring0" 00:11:58.921 }, 00:11:58.921 "method": "bdev_uring_create" 00:11:58.921 }, 00:11:58.921 { 00:11:58.921 "method": "bdev_wait_for_examine" 00:11:58.921 } 00:11:58.921 ] 00:11:58.921 } 00:11:58.921 ] 00:11:58.921 } 00:11:58.921 [2024-04-15 16:04:28.754964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.921 [2024-04-15 16:04:28.813243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.921 [2024-04-15 16:04:28.814089] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:12:02.156  Copying: 194/512 [MB] (194 MBps) Copying: 389/512 [MB] (194 MBps) Copying: 512/512 [MB] (average 196 MBps) 00:12:02.156 00:12:02.156 16:04:31 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:12:02.156 16:04:31 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:12:02.156 16:04:31 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:12:02.156 16:04:31 -- dd/uring.sh@87 -- # : 00:12:02.156 16:04:31 -- dd/uring.sh@87 -- # gen_conf 00:12:02.156 16:04:31 -- dd/uring.sh@87 -- # : 00:12:02.156 16:04:31 -- dd/common.sh@31 -- # xtrace_disable 00:12:02.156 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:12:02.156 [2024-04-15 16:04:32.005070] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:12:02.156 [2024-04-15 16:04:32.005148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76608 ] 00:12:02.156 { 00:12:02.156 "subsystems": [ 00:12:02.156 { 00:12:02.156 "subsystem": "bdev", 00:12:02.156 "config": [ 00:12:02.156 { 00:12:02.156 "params": { 00:12:02.156 "block_size": 512, 00:12:02.156 "num_blocks": 1048576, 00:12:02.156 "name": "malloc0" 00:12:02.156 }, 00:12:02.156 "method": "bdev_malloc_create" 00:12:02.156 }, 00:12:02.156 { 00:12:02.156 "params": { 00:12:02.156 "filename": "/dev/zram1", 00:12:02.156 "name": "uring0" 00:12:02.156 }, 00:12:02.156 "method": "bdev_uring_create" 00:12:02.156 }, 00:12:02.156 { 00:12:02.156 "params": { 00:12:02.156 "name": "uring0" 00:12:02.156 }, 00:12:02.156 "method": "bdev_uring_delete" 00:12:02.156 }, 00:12:02.156 { 00:12:02.156 "method": "bdev_wait_for_examine" 00:12:02.156 } 00:12:02.156 ] 00:12:02.156 } 00:12:02.156 ] 00:12:02.156 } 00:12:02.416 [2024-04-15 16:04:32.143190] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.416 [2024-04-15 16:04:32.190678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.416 [2024-04-15 16:04:32.191436] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:12:02.675 [2024-04-15 16:04:32.384054] bdev.c:4963:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 0 00:12:02.933  Copying: 0/0 [B] (average 0 Bps) 00:12:02.933 00:12:02.933 16:04:32 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:02.933 16:04:32 -- common/autotest_common.sh@638 -- # local es=0 00:12:02.933 16:04:32 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:02.933 16:04:32 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:02.933 16:04:32 -- dd/uring.sh@94 -- # : 00:12:02.933 16:04:32 -- dd/uring.sh@94 -- # gen_conf 00:12:02.933 16:04:32 -- dd/common.sh@31 -- # xtrace_disable 00:12:02.933 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:12:02.933 16:04:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:02.933 16:04:32 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:02.933 16:04:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:02.933 16:04:32 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:02.933 16:04:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:02.933 16:04:32 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:02.933 16:04:32 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:02.933 16:04:32 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:02.933 [2024-04-15 16:04:32.803498] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
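These last runs of dd_uring_copy walk the error path: the config gains a bdev_uring_delete step, so by the time spdk_dd tries to open uring0 the bdev is already gone and the copy is expected to fail with 'Could not open bdev uring0: No such device'. A rough approximation of that expectation, using a plain '!' in place of the test's NOT helper and /dev/null as a stand-in for the /dev/fd targets:
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cat > conf_del.json <<EOF
{"subsystems":[{"subsystem":"bdev","config":[
 {"params":{"block_size":512,"num_blocks":1048576,"name":"malloc0"},"method":"bdev_malloc_create"},
 {"params":{"filename":"/dev/zram1","name":"uring0"},"method":"bdev_uring_create"},
 {"params":{"name":"uring0"},"method":"bdev_uring_delete"},
 {"method":"bdev_wait_for_examine"}]}]}
EOF
! "$DD" --ib=uring0 --of=/dev/null --json conf_del.json && echo 'uring0 is gone, as expected'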
00:12:02.933 [2024-04-15 16:04:32.803621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76636 ] 00:12:02.933 { 00:12:02.933 "subsystems": [ 00:12:02.933 { 00:12:02.933 "subsystem": "bdev", 00:12:02.933 "config": [ 00:12:02.933 { 00:12:02.933 "params": { 00:12:02.933 "block_size": 512, 00:12:02.933 "num_blocks": 1048576, 00:12:02.933 "name": "malloc0" 00:12:02.933 }, 00:12:02.933 "method": "bdev_malloc_create" 00:12:02.933 }, 00:12:02.933 { 00:12:02.933 "params": { 00:12:02.933 "filename": "/dev/zram1", 00:12:02.933 "name": "uring0" 00:12:02.933 }, 00:12:02.933 "method": "bdev_uring_create" 00:12:02.933 }, 00:12:02.933 { 00:12:02.933 "params": { 00:12:02.933 "name": "uring0" 00:12:02.933 }, 00:12:02.933 "method": "bdev_uring_delete" 00:12:02.933 }, 00:12:02.933 { 00:12:02.933 "method": "bdev_wait_for_examine" 00:12:02.933 } 00:12:02.933 ] 00:12:02.933 } 00:12:02.933 ] 00:12:02.933 } 00:12:03.191 [2024-04-15 16:04:32.941655] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.191 [2024-04-15 16:04:32.993399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.191 [2024-04-15 16:04:32.994247] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:12:03.449 [2024-04-15 16:04:33.191062] bdev.c:4963:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 0 00:12:03.449 [2024-04-15 16:04:33.207204] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:12:03.449 [2024-04-15 16:04:33.207254] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:12:03.449 [2024-04-15 16:04:33.207264] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:12:03.449 [2024-04-15 16:04:33.207276] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:03.708 [2024-04-15 16:04:33.460141] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:03.708 16:04:33 -- common/autotest_common.sh@641 -- # es=237 00:12:03.708 16:04:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:03.708 16:04:33 -- common/autotest_common.sh@650 -- # es=109 00:12:03.708 16:04:33 -- common/autotest_common.sh@651 -- # case "$es" in 00:12:03.708 16:04:33 -- common/autotest_common.sh@658 -- # es=1 00:12:03.708 16:04:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:03.708 16:04:33 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:12:03.708 16:04:33 -- dd/common.sh@172 -- # local id=1 00:12:03.708 16:04:33 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:12:03.708 16:04:33 -- dd/common.sh@176 -- # echo 1 00:12:03.708 16:04:33 -- dd/common.sh@177 -- # echo 1 00:12:03.708 16:04:33 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:12:03.966 00:12:03.966 real 0m12.458s 00:12:03.966 user 0m7.635s 00:12:03.966 sys 0m10.927s 00:12:03.966 16:04:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:03.966 16:04:33 -- common/autotest_common.sh@10 -- # set +x 00:12:03.966 ************************************ 00:12:03.966 END TEST dd_uring_copy 00:12:03.966 ************************************ 00:12:03.966 00:12:03.966 real 0m12.672s 00:12:03.966 user 0m7.720s 00:12:03.966 sys 0m11.049s 00:12:03.966 16:04:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:03.966 16:04:33 -- common/autotest_common.sh@10 
-- # set +x 00:12:03.966 ************************************ 00:12:03.966 END TEST spdk_dd_uring 00:12:03.966 ************************************ 00:12:03.966 16:04:33 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:12:03.966 16:04:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:03.966 16:04:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:03.966 16:04:33 -- common/autotest_common.sh@10 -- # set +x 00:12:04.224 ************************************ 00:12:04.224 START TEST spdk_dd_sparse 00:12:04.224 ************************************ 00:12:04.224 16:04:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:12:04.224 * Looking for test storage... 00:12:04.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:04.224 16:04:34 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:04.224 16:04:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.224 16:04:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.224 16:04:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.224 16:04:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.224 16:04:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.224 16:04:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.224 16:04:34 -- paths/export.sh@5 -- # export PATH 00:12:04.224 16:04:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.224 16:04:34 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:12:04.224 16:04:34 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:12:04.224 16:04:34 -- dd/sparse.sh@110 -- # file1=file_zero1 00:12:04.224 16:04:34 -- dd/sparse.sh@111 -- # file2=file_zero2 00:12:04.224 16:04:34 -- dd/sparse.sh@112 -- # file3=file_zero3 00:12:04.224 16:04:34 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:12:04.224 16:04:34 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:12:04.224 16:04:34 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:12:04.224 16:04:34 -- dd/sparse.sh@118 -- # prepare 00:12:04.224 16:04:34 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:12:04.224 16:04:34 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:12:04.224 1+0 records in 00:12:04.224 1+0 records out 00:12:04.224 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00558462 s, 751 MB/s 00:12:04.225 16:04:34 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:12:04.225 1+0 records in 00:12:04.225 1+0 records out 00:12:04.225 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00465127 s, 902 MB/s 00:12:04.225 16:04:34 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:12:04.225 1+0 records in 00:12:04.225 1+0 records out 00:12:04.225 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00536484 s, 782 MB/s 00:12:04.225 16:04:34 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:12:04.225 16:04:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:04.225 16:04:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:04.225 16:04:34 -- common/autotest_common.sh@10 -- # set +x 00:12:04.225 ************************************ 00:12:04.225 START TEST dd_sparse_file_to_file 00:12:04.225 ************************************ 00:12:04.225 16:04:34 -- common/autotest_common.sh@1111 -- # file_to_file 00:12:04.225 16:04:34 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:12:04.225 16:04:34 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:12:04.225 16:04:34 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:04.225 16:04:34 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:12:04.225 16:04:34 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:12:04.225 16:04:34 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:12:04.225 16:04:34 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:12:04.225 16:04:34 -- dd/sparse.sh@41 -- # gen_conf 00:12:04.225 16:04:34 -- dd/common.sh@31 -- # xtrace_disable 00:12:04.225 16:04:34 -- common/autotest_common.sh@10 -- # set +x 00:12:04.482 [2024-04-15 16:04:34.228700] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
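The prepare step above leaves file_zero1 as a sparse file: three separate 4 MiB extents (at 0, 16 MiB and 32 MiB, courtesy of seek=4 and seek=8 with bs=4M), an apparent size of 36 MiB = 37748736 bytes, but only 12 MiB of blocks actually allocated. That is exactly what the stat %s / %b checks further down confirm (37748736 bytes and 24576 512-byte blocks), and 12582912 bytes is also the block size the sparse copies use. Rebuilding and inspecting that layout on its own:
rm -f file_zero1
dd if=/dev/zero of=file_zero1 bs=4M count=1            # 4 MiB extent at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4     # 4 MiB extent at 16 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8     # 4 MiB extent at 32 MiB
stat --printf='apparent=%s blocks=%b\n' file_zero1     # expected: apparent=37748736 blocks=24576
# 24576 blocks * 512 bytes = 12582912, the 12 MiB that were actually written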
00:12:04.482 [2024-04-15 16:04:34.228829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76733 ] 00:12:04.482 { 00:12:04.482 "subsystems": [ 00:12:04.482 { 00:12:04.482 "subsystem": "bdev", 00:12:04.482 "config": [ 00:12:04.482 { 00:12:04.482 "params": { 00:12:04.482 "block_size": 4096, 00:12:04.482 "filename": "dd_sparse_aio_disk", 00:12:04.482 "name": "dd_aio" 00:12:04.482 }, 00:12:04.482 "method": "bdev_aio_create" 00:12:04.482 }, 00:12:04.482 { 00:12:04.482 "params": { 00:12:04.482 "lvs_name": "dd_lvstore", 00:12:04.482 "bdev_name": "dd_aio" 00:12:04.482 }, 00:12:04.482 "method": "bdev_lvol_create_lvstore" 00:12:04.482 }, 00:12:04.482 { 00:12:04.482 "method": "bdev_wait_for_examine" 00:12:04.482 } 00:12:04.482 ] 00:12:04.482 } 00:12:04.482 ] 00:12:04.482 } 00:12:04.482 [2024-04-15 16:04:34.368636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.482 [2024-04-15 16:04:34.428843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.482 [2024-04-15 16:04:34.429725] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:12:04.998  Copying: 12/36 [MB] (average 923 MBps) 00:12:04.998 00:12:04.998 16:04:34 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:12:04.998 16:04:34 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:12:04.998 16:04:34 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:12:04.998 16:04:34 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:12:04.998 16:04:34 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:12:04.998 16:04:34 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:12:04.998 16:04:34 -- dd/sparse.sh@52 -- # stat1_b=24576 00:12:04.998 16:04:34 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:12:04.998 16:04:34 -- dd/sparse.sh@53 -- # stat2_b=24576 00:12:04.998 16:04:34 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:12:04.998 00:12:04.998 real 0m0.616s 00:12:04.998 user 0m0.345s 00:12:04.998 sys 0m0.318s 00:12:04.998 16:04:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:04.998 16:04:34 -- common/autotest_common.sh@10 -- # set +x 00:12:04.998 ************************************ 00:12:04.998 END TEST dd_sparse_file_to_file 00:12:04.998 ************************************ 00:12:04.998 16:04:34 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:12:04.998 16:04:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:04.998 16:04:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:04.998 16:04:34 -- common/autotest_common.sh@10 -- # set +x 00:12:04.998 ************************************ 00:12:04.998 START TEST dd_sparse_file_to_bdev 00:12:04.998 ************************************ 00:12:04.998 16:04:34 -- common/autotest_common.sh@1111 -- # file_to_bdev 00:12:04.998 16:04:34 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:04.998 16:04:34 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:12:04.998 16:04:34 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:12:04.999 16:04:34 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:12:04.999 16:04:34 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 
--ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:12:04.999 16:04:34 -- dd/sparse.sh@73 -- # gen_conf 00:12:04.999 16:04:34 -- dd/common.sh@31 -- # xtrace_disable 00:12:04.999 16:04:34 -- common/autotest_common.sh@10 -- # set +x 00:12:05.314 [2024-04-15 16:04:34.969161] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:12:05.314 [2024-04-15 16:04:34.969247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76786 ] 00:12:05.314 { 00:12:05.314 "subsystems": [ 00:12:05.314 { 00:12:05.314 "subsystem": "bdev", 00:12:05.314 "config": [ 00:12:05.314 { 00:12:05.314 "params": { 00:12:05.314 "block_size": 4096, 00:12:05.314 "filename": "dd_sparse_aio_disk", 00:12:05.314 "name": "dd_aio" 00:12:05.314 }, 00:12:05.314 "method": "bdev_aio_create" 00:12:05.314 }, 00:12:05.314 { 00:12:05.314 "params": { 00:12:05.314 "lvs_name": "dd_lvstore", 00:12:05.314 "lvol_name": "dd_lvol", 00:12:05.314 "size": 37748736, 00:12:05.314 "thin_provision": true 00:12:05.314 }, 00:12:05.314 "method": "bdev_lvol_create" 00:12:05.314 }, 00:12:05.314 { 00:12:05.314 "method": "bdev_wait_for_examine" 00:12:05.314 } 00:12:05.314 ] 00:12:05.314 } 00:12:05.314 ] 00:12:05.314 } 00:12:05.314 [2024-04-15 16:04:35.104484] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.314 [2024-04-15 16:04:35.153060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.314 [2024-04-15 16:04:35.153817] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:12:05.314 [2024-04-15 16:04:35.236421] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:12:05.572  Copying: 12/36 [MB] (average 521 MBps)[2024-04-15 16:04:35.277348] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:12:05.572 00:12:05.572 00:12:05.572 ************************************ 00:12:05.572 END TEST dd_sparse_file_to_bdev 00:12:05.572 ************************************ 00:12:05.572 00:12:05.572 real 0m0.540s 00:12:05.572 user 0m0.332s 00:12:05.572 sys 0m0.288s 00:12:05.572 16:04:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:05.572 16:04:35 -- common/autotest_common.sh@10 -- # set +x 00:12:05.572 16:04:35 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:12:05.572 16:04:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:05.572 16:04:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:05.572 16:04:35 -- common/autotest_common.sh@10 -- # set +x 00:12:05.831 ************************************ 00:12:05.831 START TEST dd_sparse_bdev_to_file 00:12:05.831 ************************************ 00:12:05.831 16:04:35 -- common/autotest_common.sh@1111 -- # bdev_to_file 00:12:05.831 16:04:35 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:12:05.831 16:04:35 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:12:05.831 16:04:35 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:05.831 16:04:35 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:12:05.831 16:04:35 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:12:05.831 16:04:35 -- dd/sparse.sh@91 -- # gen_conf 00:12:05.831 16:04:35 -- dd/common.sh@31 -- # xtrace_disable 00:12:05.831 16:04:35 -- common/autotest_common.sh@10 -- # set +x 00:12:05.831 [2024-04-15 16:04:35.644636] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:12:05.831 [2024-04-15 16:04:35.644751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76823 ] 00:12:05.831 { 00:12:05.831 "subsystems": [ 00:12:05.831 { 00:12:05.831 "subsystem": "bdev", 00:12:05.831 "config": [ 00:12:05.831 { 00:12:05.831 "params": { 00:12:05.831 "block_size": 4096, 00:12:05.831 "filename": "dd_sparse_aio_disk", 00:12:05.831 "name": "dd_aio" 00:12:05.831 }, 00:12:05.831 "method": "bdev_aio_create" 00:12:05.831 }, 00:12:05.831 { 00:12:05.831 "method": "bdev_wait_for_examine" 00:12:05.831 } 00:12:05.831 ] 00:12:05.831 } 00:12:05.831 ] 00:12:05.831 } 00:12:05.831 [2024-04-15 16:04:35.788262] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.089 [2024-04-15 16:04:35.840628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.089 [2024-04-15 16:04:35.841445] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:12:06.348  Copying: 12/36 [MB] (average 923 MBps) 00:12:06.348 00:12:06.348 16:04:36 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:12:06.348 16:04:36 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:12:06.348 16:04:36 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:12:06.348 16:04:36 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:12:06.348 16:04:36 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:12:06.348 16:04:36 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:12:06.348 16:04:36 -- dd/sparse.sh@102 -- # stat2_b=24576 00:12:06.348 16:04:36 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:12:06.348 16:04:36 -- dd/sparse.sh@103 -- # stat3_b=24576 00:12:06.348 16:04:36 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:12:06.348 00:12:06.348 real 0m0.600s 00:12:06.348 user 0m0.342s 00:12:06.348 sys 0m0.344s 00:12:06.348 16:04:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:06.348 16:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:06.348 ************************************ 00:12:06.348 END TEST dd_sparse_bdev_to_file 00:12:06.348 ************************************ 00:12:06.348 16:04:36 -- dd/sparse.sh@1 -- # cleanup 00:12:06.348 16:04:36 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:12:06.348 16:04:36 -- dd/sparse.sh@12 -- # rm file_zero1 00:12:06.348 16:04:36 -- dd/sparse.sh@13 -- # rm file_zero2 00:12:06.348 16:04:36 -- dd/sparse.sh@14 -- # rm file_zero3 00:12:06.348 ************************************ 00:12:06.348 END TEST spdk_dd_sparse 00:12:06.348 00:12:06.348 real 0m2.272s 00:12:06.348 user 0m1.194s 00:12:06.348 sys 0m1.248s 00:12:06.348 16:04:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:06.348 16:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:06.348 ************************************ 00:12:06.348 16:04:36 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:12:06.348 16:04:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:06.348 16:04:36 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:12:06.348 16:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:06.607 ************************************ 00:12:06.607 START TEST spdk_dd_negative 00:12:06.607 ************************************ 00:12:06.607 16:04:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:12:06.607 * Looking for test storage... 00:12:06.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:06.607 16:04:36 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:06.607 16:04:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.607 16:04:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.607 16:04:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.607 16:04:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.607 16:04:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.607 16:04:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.607 16:04:36 -- paths/export.sh@5 -- # export PATH 00:12:06.607 16:04:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.607 16:04:36 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:06.607 16:04:36 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:06.607 16:04:36 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:06.607 16:04:36 -- dd/negative_dd.sh@105 
-- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:06.607 16:04:36 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:12:06.607 16:04:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:06.607 16:04:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:06.607 16:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:06.866 ************************************ 00:12:06.866 START TEST dd_invalid_arguments 00:12:06.866 ************************************ 00:12:06.866 16:04:36 -- common/autotest_common.sh@1111 -- # invalid_arguments 00:12:06.866 16:04:36 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:06.866 16:04:36 -- common/autotest_common.sh@638 -- # local es=0 00:12:06.866 16:04:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:06.866 16:04:36 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.866 16:04:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:06.866 16:04:36 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.866 16:04:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:06.866 16:04:36 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.866 16:04:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:06.866 16:04:36 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.866 16:04:36 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:06.866 16:04:36 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:06.866 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:12:06.866 00:12:06.866 CPU options: 00:12:06.866 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:12:06.866 (like [0,1,10]) 00:12:06.866 --lcores lcore to CPU mapping list. The list is in the format: 00:12:06.866 [<,lcores[@CPUs]>...] 00:12:06.866 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:12:06.866 Within the group, '-' is used for range separator, 00:12:06.866 ',' is used for single number separator. 00:12:06.866 '( )' can be omitted for single element group, 00:12:06.866 '@' can be omitted if cpus and lcores have the same value 00:12:06.866 --disable-cpumask-locks Disable CPU core lock files. 00:12:06.866 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:12:06.866 pollers in the app support interrupt mode) 00:12:06.866 -p, --main-core main (primary) core for DPDK 00:12:06.866 00:12:06.866 Configuration options: 00:12:06.866 -c, --config, --json JSON config file 00:12:06.866 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:12:06.866 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:12:06.866 --wait-for-rpc wait for RPCs to initialize subsystems 00:12:06.866 --rpcs-allowed comma-separated list of permitted RPCS 00:12:06.866 --json-ignore-init-errors don't exit on invalid config entry 00:12:06.867 00:12:06.867 Memory options: 00:12:06.867 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:12:06.867 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:12:06.867 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:12:06.867 -R, --huge-unlink unlink huge files after initialization 00:12:06.867 -n, --mem-channels number of memory channels used for DPDK 00:12:06.867 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:12:06.867 --msg-mempool-size global message memory pool size in count (default: 262143) 00:12:06.867 --no-huge run without using hugepages 00:12:06.867 -i, --shm-id shared memory ID (optional) 00:12:06.867 -g, --single-file-segments force creating just one hugetlbfs file 00:12:06.867 00:12:06.867 PCI options: 00:12:06.867 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:12:06.867 -B, --pci-blocked pci addr to block (can be used more than once) 00:12:06.867 -u, --no-pci disable PCI access 00:12:06.867 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:12:06.867 00:12:06.867 Log options: 00:12:06.867 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:12:06.867 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:12:06.867 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:12:06.867 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:12:06.867 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:12:06.867 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:12:06.867 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:12:06.867 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:12:06.867 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:12:06.867 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:12:06.867 virtio_vfio_user, vmd) 00:12:06.867 --silence-noticelog disable notice level logging to stderr 00:12:06.867 00:12:06.867 Trace options: 00:12:06.867 --num-trace-entries number of trace entries for each core, must be power of 2, 00:12:06.867 setting 0 to disable trace (default 32768) 00:12:06.867 Tracepoints vary in size and can use more than one trace entry. 00:12:06.867 -e, --tpoint-group [:] 00:12:06.867 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:12:06.867 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:12:06.867 [2024-04-15 16:04:36.645369] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:12:06.867 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:12:06.867 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:12:06.867 a tracepoint group. First tpoint inside a group can be enabled by 00:12:06.867 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:12:06.867 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:12:06.867 in /include/spdk_internal/trace_defs.h 00:12:06.867 00:12:06.867 Other options: 00:12:06.867 -h, --help show this usage 00:12:06.867 -v, --version print SPDK version 00:12:06.867 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:12:06.867 --env-context Opaque context for use of the env implementation 00:12:06.867 00:12:06.867 Application specific: 00:12:06.867 [--------- DD Options ---------] 00:12:06.867 --if Input file. Must specify either --if or --ib. 00:12:06.867 --ib Input bdev. Must specifier either --if or --ib 00:12:06.867 --of Output file. Must specify either --of or --ob. 00:12:06.867 --ob Output bdev. Must specify either --of or --ob. 00:12:06.867 --iflag Input file flags. 00:12:06.867 --oflag Output file flags. 00:12:06.867 --bs I/O unit size (default: 4096) 00:12:06.867 --qd Queue depth (default: 2) 00:12:06.867 --count I/O unit count. The number of I/O units to copy. (default: all) 00:12:06.867 --skip Skip this many I/O units at start of input. (default: 0) 00:12:06.867 --seek Skip this many I/O units at start of output. (default: 0) 00:12:06.867 --aio Force usage of AIO. (by default io_uring is used if available) 00:12:06.867 --sparse Enable hole skipping in input target 00:12:06.867 Available iflag and oflag values: 00:12:06.867 append - append mode 00:12:06.867 direct - use direct I/O for data 00:12:06.867 directory - fail unless a directory 00:12:06.867 dsync - use synchronized I/O for data 00:12:06.867 noatime - do not update access time 00:12:06.867 noctty - do not assign controlling terminal from file 00:12:06.867 nofollow - do not follow symlinks 00:12:06.867 nonblock - use non-blocking I/O 00:12:06.867 sync - use synchronized I/O for data and metadata 00:12:06.867 16:04:36 -- common/autotest_common.sh@641 -- # es=2 00:12:06.867 16:04:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:06.867 16:04:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:06.867 16:04:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:06.867 00:12:06.867 real 0m0.079s 00:12:06.867 user 0m0.042s 00:12:06.867 sys 0m0.032s 00:12:06.867 16:04:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:06.867 16:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:06.867 ************************************ 00:12:06.867 END TEST dd_invalid_arguments 00:12:06.867 ************************************ 00:12:06.867 16:04:36 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:12:06.867 16:04:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:06.867 16:04:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:06.867 16:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:06.867 ************************************ 00:12:06.867 START TEST dd_double_input 00:12:06.867 ************************************ 00:12:06.867 16:04:36 -- common/autotest_common.sh@1111 -- # double_input 00:12:06.867 16:04:36 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:12:06.867 16:04:36 -- common/autotest_common.sh@638 -- # local es=0 00:12:06.867 16:04:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:12:06.867 16:04:36 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.867 16:04:36 -- common/autotest_common.sh@630 
-- # case "$(type -t "$arg")" in 00:12:06.867 16:04:36 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.867 16:04:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:06.867 16:04:36 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.867 16:04:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:06.867 16:04:36 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.867 16:04:36 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:06.867 16:04:36 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:12:07.125 [2024-04-15 16:04:36.847137] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:12:07.125 16:04:36 -- common/autotest_common.sh@641 -- # es=22 00:12:07.125 16:04:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:07.125 16:04:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:07.125 16:04:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:07.125 00:12:07.125 real 0m0.062s 00:12:07.125 user 0m0.030s 00:12:07.125 sys 0m0.029s 00:12:07.125 16:04:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:07.125 16:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:07.125 ************************************ 00:12:07.125 END TEST dd_double_input 00:12:07.125 ************************************ 00:12:07.125 16:04:36 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:12:07.125 16:04:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:07.125 16:04:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:07.125 16:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:07.125 ************************************ 00:12:07.125 START TEST dd_double_output 00:12:07.125 ************************************ 00:12:07.125 16:04:36 -- common/autotest_common.sh@1111 -- # double_output 00:12:07.125 16:04:36 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:07.125 16:04:36 -- common/autotest_common.sh@638 -- # local es=0 00:12:07.125 16:04:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:07.125 16:04:36 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.125 16:04:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.125 16:04:36 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.125 16:04:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.125 16:04:36 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.125 16:04:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.125 16:04:36 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.125 16:04:36 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:07.125 16:04:36 -- common/autotest_common.sh@641 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:07.125 [2024-04-15 16:04:37.043525] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:12:07.125 16:04:37 -- common/autotest_common.sh@641 -- # es=22 00:12:07.125 16:04:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:07.125 16:04:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:07.125 16:04:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:07.125 00:12:07.125 real 0m0.078s 00:12:07.125 user 0m0.049s 00:12:07.125 sys 0m0.028s 00:12:07.125 16:04:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:07.125 ************************************ 00:12:07.125 16:04:37 -- common/autotest_common.sh@10 -- # set +x 00:12:07.125 END TEST dd_double_output 00:12:07.125 ************************************ 00:12:07.384 16:04:37 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:12:07.384 16:04:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:07.384 16:04:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:07.384 16:04:37 -- common/autotest_common.sh@10 -- # set +x 00:12:07.384 ************************************ 00:12:07.384 START TEST dd_no_input 00:12:07.384 ************************************ 00:12:07.384 16:04:37 -- common/autotest_common.sh@1111 -- # no_input 00:12:07.384 16:04:37 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:07.384 16:04:37 -- common/autotest_common.sh@638 -- # local es=0 00:12:07.384 16:04:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:07.384 16:04:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.384 16:04:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.384 16:04:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.384 16:04:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.384 16:04:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.384 16:04:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.384 16:04:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.384 16:04:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:07.384 16:04:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:07.384 [2024-04-15 16:04:37.232730] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:12:07.384 16:04:37 -- common/autotest_common.sh@641 -- # es=22 00:12:07.384 16:04:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:07.384 16:04:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:07.384 16:04:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:07.384 00:12:07.384 real 0m0.060s 00:12:07.384 user 0m0.030s 00:12:07.384 sys 0m0.028s 00:12:07.384 16:04:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:07.384 16:04:37 -- common/autotest_common.sh@10 -- # set +x 00:12:07.384 ************************************ 00:12:07.384 END TEST dd_no_input 00:12:07.384 ************************************ 00:12:07.385 16:04:37 -- dd/negative_dd.sh@111 -- # run_test dd_no_output 
no_output 00:12:07.385 16:04:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:07.385 16:04:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:07.385 16:04:37 -- common/autotest_common.sh@10 -- # set +x 00:12:07.643 ************************************ 00:12:07.643 START TEST dd_no_output 00:12:07.643 ************************************ 00:12:07.643 16:04:37 -- common/autotest_common.sh@1111 -- # no_output 00:12:07.643 16:04:37 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:07.643 16:04:37 -- common/autotest_common.sh@638 -- # local es=0 00:12:07.643 16:04:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:07.643 16:04:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.643 16:04:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.643 16:04:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.643 16:04:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.643 16:04:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.643 16:04:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.643 16:04:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.643 16:04:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:07.643 16:04:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:07.643 [2024-04-15 16:04:37.408758] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:12:07.643 16:04:37 -- common/autotest_common.sh@641 -- # es=22 00:12:07.643 16:04:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:07.643 16:04:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:07.643 ************************************ 00:12:07.643 END TEST dd_no_output 00:12:07.643 ************************************ 00:12:07.643 16:04:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:07.643 00:12:07.643 real 0m0.062s 00:12:07.643 user 0m0.031s 00:12:07.643 sys 0m0.030s 00:12:07.643 16:04:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:07.643 16:04:37 -- common/autotest_common.sh@10 -- # set +x 00:12:07.643 16:04:37 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:12:07.643 16:04:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:07.643 16:04:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:07.643 16:04:37 -- common/autotest_common.sh@10 -- # set +x 00:12:07.643 ************************************ 00:12:07.643 START TEST dd_wrong_blocksize 00:12:07.643 ************************************ 00:12:07.643 16:04:37 -- common/autotest_common.sh@1111 -- # wrong_blocksize 00:12:07.643 16:04:37 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:07.643 16:04:37 -- common/autotest_common.sh@638 -- # local es=0 00:12:07.643 16:04:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:07.643 16:04:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.643 16:04:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.643 16:04:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.643 16:04:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.643 16:04:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.643 16:04:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.643 16:04:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.643 16:04:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:07.643 16:04:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:07.643 [2024-04-15 16:04:37.603050] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:12:07.901 16:04:37 -- common/autotest_common.sh@641 -- # es=22 00:12:07.901 16:04:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:07.901 16:04:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:07.901 16:04:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:07.901 00:12:07.901 real 0m0.068s 00:12:07.901 user 0m0.044s 00:12:07.901 sys 0m0.024s 00:12:07.901 16:04:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:07.901 16:04:37 -- common/autotest_common.sh@10 -- # set +x 00:12:07.901 ************************************ 00:12:07.901 END TEST dd_wrong_blocksize 00:12:07.901 ************************************ 00:12:07.901 16:04:37 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:12:07.901 16:04:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:07.901 16:04:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:07.901 16:04:37 -- common/autotest_common.sh@10 -- # set +x 00:12:07.901 ************************************ 00:12:07.901 START TEST dd_smaller_blocksize 00:12:07.901 ************************************ 00:12:07.901 16:04:37 -- common/autotest_common.sh@1111 -- # smaller_blocksize 00:12:07.901 16:04:37 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:07.901 16:04:37 -- common/autotest_common.sh@638 -- # local es=0 00:12:07.901 16:04:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:07.901 16:04:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.901 16:04:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.901 16:04:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.901 16:04:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.901 16:04:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.901 16:04:37 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:07.901 16:04:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:07.901 16:04:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:07.901 16:04:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:07.901 [2024-04-15 16:04:37.800802] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:12:07.901 [2024-04-15 16:04:37.801497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77082 ] 00:12:08.159 [2024-04-15 16:04:37.943890] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.159 [2024-04-15 16:04:38.005207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.159 [2024-04-15 16:04:38.005303] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:12:08.159 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:12:08.159 [2024-04-15 16:04:38.083527] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:12:08.159 [2024-04-15 16:04:38.083565] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:08.417 [2024-04-15 16:04:38.184724] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:08.417 16:04:38 -- common/autotest_common.sh@641 -- # es=244 00:12:08.417 16:04:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:08.417 16:04:38 -- common/autotest_common.sh@650 -- # es=116 00:12:08.417 16:04:38 -- common/autotest_common.sh@651 -- # case "$es" in 00:12:08.417 16:04:38 -- common/autotest_common.sh@658 -- # es=1 00:12:08.417 16:04:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:08.417 00:12:08.417 real 0m0.536s 00:12:08.417 user 0m0.278s 00:12:08.417 sys 0m0.152s 00:12:08.417 16:04:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:08.417 16:04:38 -- common/autotest_common.sh@10 -- # set +x 00:12:08.417 ************************************ 00:12:08.417 END TEST dd_smaller_blocksize 00:12:08.417 ************************************ 00:12:08.417 16:04:38 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:12:08.417 16:04:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:08.417 16:04:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:08.417 16:04:38 -- common/autotest_common.sh@10 -- # set +x 00:12:08.676 ************************************ 00:12:08.676 START TEST dd_invalid_count 00:12:08.676 ************************************ 00:12:08.676 16:04:38 -- common/autotest_common.sh@1111 -- # invalid_count 00:12:08.676 16:04:38 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:08.676 16:04:38 -- common/autotest_common.sh@638 -- # local es=0 00:12:08.676 16:04:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:08.676 16:04:38 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.676 16:04:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:08.676 16:04:38 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.676 16:04:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:08.676 16:04:38 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.676 16:04:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:08.676 16:04:38 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.676 16:04:38 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:08.676 16:04:38 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:08.676 [2024-04-15 16:04:38.442336] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:12:08.676 16:04:38 -- common/autotest_common.sh@641 -- # es=22 00:12:08.676 ************************************ 00:12:08.676 END TEST dd_invalid_count 00:12:08.676 16:04:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:08.676 16:04:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:08.676 16:04:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:08.676 00:12:08.676 real 0m0.055s 00:12:08.676 user 0m0.032s 00:12:08.676 sys 0m0.022s 00:12:08.676 16:04:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:08.676 16:04:38 -- common/autotest_common.sh@10 -- # set +x 00:12:08.676 ************************************ 00:12:08.676 16:04:38 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:12:08.676 16:04:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:08.676 16:04:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:08.676 16:04:38 -- common/autotest_common.sh@10 -- # set +x 00:12:08.676 ************************************ 00:12:08.676 START TEST dd_invalid_oflag 00:12:08.676 ************************************ 00:12:08.676 16:04:38 -- common/autotest_common.sh@1111 -- # invalid_oflag 00:12:08.676 16:04:38 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:08.676 16:04:38 -- common/autotest_common.sh@638 -- # local es=0 00:12:08.676 16:04:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:08.676 16:04:38 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.676 16:04:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:08.676 16:04:38 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.676 16:04:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:08.676 16:04:38 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.676 16:04:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:08.676 16:04:38 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.676 16:04:38 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:08.676 16:04:38 -- common/autotest_common.sh@641 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:08.676 [2024-04-15 16:04:38.629560] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:12:08.935 16:04:38 -- common/autotest_common.sh@641 -- # es=22 00:12:08.935 16:04:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:08.935 16:04:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:08.935 16:04:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:08.935 00:12:08.935 real 0m0.085s 00:12:08.935 user 0m0.042s 00:12:08.935 sys 0m0.041s 00:12:08.935 16:04:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:08.935 16:04:38 -- common/autotest_common.sh@10 -- # set +x 00:12:08.935 ************************************ 00:12:08.935 END TEST dd_invalid_oflag 00:12:08.935 ************************************ 00:12:08.935 16:04:38 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:12:08.935 16:04:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:08.935 16:04:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:08.935 16:04:38 -- common/autotest_common.sh@10 -- # set +x 00:12:08.935 ************************************ 00:12:08.935 START TEST dd_invalid_iflag 00:12:08.935 ************************************ 00:12:08.935 16:04:38 -- common/autotest_common.sh@1111 -- # invalid_iflag 00:12:08.935 16:04:38 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:08.935 16:04:38 -- common/autotest_common.sh@638 -- # local es=0 00:12:08.935 16:04:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:08.935 16:04:38 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.935 16:04:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:08.935 16:04:38 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.935 16:04:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:08.935 16:04:38 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.935 16:04:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:08.935 16:04:38 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.935 16:04:38 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:08.935 16:04:38 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:08.935 [2024-04-15 16:04:38.837314] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:12:08.935 16:04:38 -- common/autotest_common.sh@641 -- # es=22 00:12:08.935 16:04:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:08.935 16:04:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:08.935 16:04:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:08.935 00:12:08.935 real 0m0.071s 00:12:08.935 user 0m0.045s 00:12:08.935 sys 0m0.025s 00:12:08.935 16:04:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:08.935 16:04:38 -- common/autotest_common.sh@10 -- # set +x 00:12:08.935 ************************************ 00:12:08.935 END TEST dd_invalid_iflag 00:12:08.935 ************************************ 00:12:09.194 16:04:38 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:12:09.194 16:04:38 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:09.194 16:04:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.194 16:04:38 -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 ************************************ 00:12:09.194 START TEST dd_unknown_flag 00:12:09.194 ************************************ 00:12:09.194 16:04:38 -- common/autotest_common.sh@1111 -- # unknown_flag 00:12:09.194 16:04:38 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:09.194 16:04:38 -- common/autotest_common.sh@638 -- # local es=0 00:12:09.194 16:04:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:09.194 16:04:38 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.194 16:04:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:09.194 16:04:38 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.194 16:04:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:09.194 16:04:38 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.194 16:04:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:09.194 16:04:39 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.194 16:04:39 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:09.194 16:04:39 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:09.194 [2024-04-15 16:04:39.052358] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
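Every case in the spdk_dd_negative suite above follows the same pattern: run spdk_dd with an invalid option combination through the harness's NOT helper and pass only if the command exits non-zero (the es= values in the log are the exit statuses that check captures). The snippet below is a simplified sketch of that pattern for illustration; the real NOT in autotest_common.sh does more exit-code bookkeeping than this.

  # simplified stand-in for the harness's NOT helper: succeed only when the
  # wrapped command fails
  NOT() {
      if "$@"; then
          echo "unexpectedly succeeded: $*" >&2
          return 1
      fi
      return 0
  }

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  NOT "$DD" --ii= --ob=             # unrecognized option, spdk_dd prints usage and fails
  NOT "$DD" --ib= --ob= --oflag=0   # --oflag is only valid together with --of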
00:12:09.194 [2024-04-15 16:04:39.052458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77191 ] 00:12:09.454 [2024-04-15 16:04:39.193013] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.454 [2024-04-15 16:04:39.240561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.454 [2024-04-15 16:04:39.240643] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:12:09.454 [2024-04-15 16:04:39.306479] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:12:09.454 [2024-04-15 16:04:39.306540] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:09.454 [2024-04-15 16:04:39.306601] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:12:09.454 [2024-04-15 16:04:39.306615] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:09.454 [2024-04-15 16:04:39.306850] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:12:09.454 [2024-04-15 16:04:39.306865] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:09.454 [2024-04-15 16:04:39.306920] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:09.454 [2024-04-15 16:04:39.306929] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:09.454 [2024-04-15 16:04:39.399880] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:09.713 16:04:39 -- common/autotest_common.sh@641 -- # es=234 00:12:09.713 16:04:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:09.713 16:04:39 -- common/autotest_common.sh@650 -- # es=106 00:12:09.713 16:04:39 -- common/autotest_common.sh@651 -- # case "$es" in 00:12:09.713 16:04:39 -- common/autotest_common.sh@658 -- # es=1 00:12:09.713 16:04:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:09.713 00:12:09.713 real 0m0.490s 00:12:09.713 user 0m0.246s 00:12:09.713 sys 0m0.150s 00:12:09.713 16:04:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:09.713 ************************************ 00:12:09.713 END TEST dd_unknown_flag 00:12:09.713 ************************************ 00:12:09.713 16:04:39 -- common/autotest_common.sh@10 -- # set +x 00:12:09.713 16:04:39 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:12:09.713 16:04:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:09.713 16:04:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.713 16:04:39 -- common/autotest_common.sh@10 -- # set +x 00:12:09.713 ************************************ 00:12:09.713 START TEST dd_invalid_json 00:12:09.713 ************************************ 00:12:09.713 16:04:39 -- common/autotest_common.sh@1111 -- # invalid_json 00:12:09.713 16:04:39 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:09.713 16:04:39 -- dd/negative_dd.sh@95 -- # : 00:12:09.713 16:04:39 -- common/autotest_common.sh@638 -- # local es=0 00:12:09.713 16:04:39 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:09.713 16:04:39 -- common/autotest_common.sh@626 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.713 16:04:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:09.713 16:04:39 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.713 16:04:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:09.713 16:04:39 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.713 16:04:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:09.713 16:04:39 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.713 16:04:39 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:09.713 16:04:39 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:09.713 [2024-04-15 16:04:39.651378] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:12:09.713 [2024-04-15 16:04:39.651459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77229 ] 00:12:09.971 [2024-04-15 16:04:39.794737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.971 [2024-04-15 16:04:39.847510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.971 [2024-04-15 16:04:39.847603] json_config.c: 515:parse_json: *ERROR*: JSON data cannot be empty 00:12:09.971 [2024-04-15 16:04:39.847623] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:09.971 [2024-04-15 16:04:39.847637] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:09.971 [2024-04-15 16:04:39.847681] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:09.971 16:04:39 -- common/autotest_common.sh@641 -- # es=234 00:12:09.971 16:04:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:09.971 16:04:39 -- common/autotest_common.sh@650 -- # es=106 00:12:09.971 16:04:39 -- common/autotest_common.sh@651 -- # case "$es" in 00:12:09.971 16:04:39 -- common/autotest_common.sh@658 -- # es=1 00:12:09.971 16:04:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:10.229 00:12:10.229 real 0m0.335s 00:12:10.230 user 0m0.137s 00:12:10.230 sys 0m0.095s 00:12:10.230 16:04:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:10.230 16:04:39 -- common/autotest_common.sh@10 -- # set +x 00:12:10.230 ************************************ 00:12:10.230 END TEST dd_invalid_json 00:12:10.230 ************************************ 00:12:10.230 00:12:10.230 real 0m3.584s 00:12:10.230 user 0m1.532s 00:12:10.230 sys 0m1.540s 00:12:10.230 16:04:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:10.230 ************************************ 00:12:10.230 END TEST spdk_dd_negative 00:12:10.230 ************************************ 00:12:10.230 16:04:39 -- common/autotest_common.sh@10 -- # set +x 00:12:10.230 00:12:10.230 real 1m10.457s 00:12:10.230 user 0m41.728s 00:12:10.230 sys 0m31.845s 00:12:10.230 16:04:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:10.230 16:04:40 -- common/autotest_common.sh@10 -- # set +x 00:12:10.230 ************************************ 00:12:10.230 END TEST spdk_dd 
00:12:10.230 ************************************ 00:12:10.230 16:04:40 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:12:10.230 16:04:40 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:12:10.230 16:04:40 -- spdk/autotest.sh@258 -- # timing_exit lib 00:12:10.230 16:04:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:10.230 16:04:40 -- common/autotest_common.sh@10 -- # set +x 00:12:10.230 16:04:40 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:12:10.230 16:04:40 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:12:10.230 16:04:40 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:12:10.230 16:04:40 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:12:10.230 16:04:40 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:12:10.230 16:04:40 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:12:10.230 16:04:40 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:10.230 16:04:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:10.230 16:04:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:10.230 16:04:40 -- common/autotest_common.sh@10 -- # set +x 00:12:10.489 ************************************ 00:12:10.489 START TEST nvmf_tcp 00:12:10.489 ************************************ 00:12:10.489 16:04:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:10.489 * Looking for test storage... 00:12:10.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:10.489 16:04:40 -- nvmf/nvmf.sh@10 -- # uname -s 00:12:10.489 16:04:40 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:12:10.489 16:04:40 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:10.489 16:04:40 -- nvmf/common.sh@7 -- # uname -s 00:12:10.489 16:04:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.489 16:04:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.489 16:04:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.489 16:04:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.489 16:04:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.489 16:04:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.489 16:04:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.489 16:04:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.489 16:04:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.489 16:04:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.489 16:04:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:12:10.489 16:04:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:12:10.489 16:04:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.489 16:04:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.489 16:04:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:10.489 16:04:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.489 16:04:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.489 16:04:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.489 16:04:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.489 16:04:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.489 16:04:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.489 16:04:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.490 16:04:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.490 16:04:40 -- paths/export.sh@5 -- # export PATH 00:12:10.490 16:04:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.490 16:04:40 -- nvmf/common.sh@47 -- # : 0 00:12:10.490 16:04:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.490 16:04:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.490 16:04:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.490 16:04:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.490 16:04:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.490 16:04:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.490 16:04:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.490 16:04:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.490 16:04:40 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:10.490 16:04:40 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:12:10.490 16:04:40 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:12:10.490 16:04:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:10.490 16:04:40 -- common/autotest_common.sh@10 -- # set +x 00:12:10.490 16:04:40 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:12:10.490 16:04:40 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:10.490 16:04:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:10.490 16:04:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:10.490 16:04:40 -- common/autotest_common.sh@10 -- # set +x 00:12:10.490 ************************************ 00:12:10.490 START TEST nvmf_host_management 00:12:10.490 ************************************ 00:12:10.490 16:04:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:10.749 * Looking for test storage... 
00:12:10.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:10.749 16:04:40 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:10.749 16:04:40 -- nvmf/common.sh@7 -- # uname -s 00:12:10.749 16:04:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.749 16:04:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.749 16:04:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.749 16:04:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.749 16:04:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.749 16:04:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.749 16:04:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.749 16:04:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.749 16:04:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.749 16:04:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.749 16:04:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:12:10.749 16:04:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:12:10.749 16:04:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.749 16:04:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.749 16:04:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:10.749 16:04:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.749 16:04:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:10.749 16:04:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.749 16:04:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.749 16:04:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.749 16:04:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.749 16:04:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.749 16:04:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.749 16:04:40 -- paths/export.sh@5 -- # export PATH 00:12:10.749 16:04:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.749 16:04:40 -- nvmf/common.sh@47 -- # : 0 00:12:10.749 16:04:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.749 16:04:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.749 16:04:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.749 16:04:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.749 16:04:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.749 16:04:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.749 16:04:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.749 16:04:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.749 16:04:40 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.749 16:04:40 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.749 16:04:40 -- target/host_management.sh@104 -- # nvmftestinit 00:12:10.749 16:04:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:10.749 16:04:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.749 16:04:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:10.749 16:04:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:10.749 16:04:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:10.749 16:04:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.749 16:04:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.749 16:04:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.749 16:04:40 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:10.749 16:04:40 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:10.749 16:04:40 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:10.749 16:04:40 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:10.749 16:04:40 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:10.749 16:04:40 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:10.749 16:04:40 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.749 16:04:40 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.749 16:04:40 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:10.749 16:04:40 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:10.749 16:04:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:10.750 16:04:40 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:10.750 16:04:40 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:10.750 16:04:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.750 16:04:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:10.750 16:04:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:10.750 16:04:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:10.750 16:04:40 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:10.750 16:04:40 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:10.750 Cannot find device "nvmf_init_br" 00:12:10.750 16:04:40 -- nvmf/common.sh@154 -- # true 00:12:10.750 16:04:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:10.750 Cannot find device "nvmf_tgt_br" 00:12:10.750 16:04:40 -- nvmf/common.sh@155 -- # true 00:12:10.750 16:04:40 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:10.750 Cannot find device "nvmf_tgt_br2" 00:12:10.750 16:04:40 -- nvmf/common.sh@156 -- # true 00:12:10.750 16:04:40 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:10.750 Cannot find device "nvmf_init_br" 00:12:10.750 16:04:40 -- nvmf/common.sh@157 -- # true 00:12:10.750 16:04:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:10.750 Cannot find device "nvmf_tgt_br" 00:12:10.750 16:04:40 -- nvmf/common.sh@158 -- # true 00:12:10.750 16:04:40 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:10.750 Cannot find device "nvmf_tgt_br2" 00:12:10.750 16:04:40 -- nvmf/common.sh@159 -- # true 00:12:10.750 16:04:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:10.750 Cannot find device "nvmf_br" 00:12:10.750 16:04:40 -- nvmf/common.sh@160 -- # true 00:12:10.750 16:04:40 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:10.750 Cannot find device "nvmf_init_if" 00:12:10.750 16:04:40 -- nvmf/common.sh@161 -- # true 00:12:10.750 16:04:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:10.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.750 16:04:40 -- nvmf/common.sh@162 -- # true 00:12:10.750 16:04:40 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:10.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:10.750 16:04:40 -- nvmf/common.sh@163 -- # true 00:12:10.750 16:04:40 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:11.008 16:04:40 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:11.008 16:04:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:11.008 16:04:40 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:11.008 16:04:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:11.008 16:04:40 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:11.008 16:04:40 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:11.008 16:04:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:11.008 16:04:40 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:11.008 16:04:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:11.008 16:04:40 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:11.008 16:04:40 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:11.008 16:04:40 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:11.008 16:04:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:11.008 16:04:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:11.008 16:04:40 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:11.008 16:04:40 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:11.008 16:04:40 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:11.008 16:04:40 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:11.008 16:04:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:11.267 16:04:40 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:11.267 16:04:40 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:11.267 16:04:41 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:11.267 16:04:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:11.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:12:11.267 00:12:11.267 --- 10.0.0.2 ping statistics --- 00:12:11.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.267 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:12:11.267 16:04:41 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:11.267 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:11.267 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:12:11.267 00:12:11.267 --- 10.0.0.3 ping statistics --- 00:12:11.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.267 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:11.267 16:04:41 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:11.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:11.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:11.267 00:12:11.267 --- 10.0.0.1 ping statistics --- 00:12:11.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.267 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:11.267 16:04:41 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.267 16:04:41 -- nvmf/common.sh@422 -- # return 0 00:12:11.267 16:04:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:11.267 16:04:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.267 16:04:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:11.267 16:04:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:11.267 16:04:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.267 16:04:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:11.267 16:04:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:11.267 16:04:41 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:12:11.267 16:04:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:11.267 16:04:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:11.267 16:04:41 -- common/autotest_common.sh@10 -- # set +x 00:12:11.267 ************************************ 00:12:11.267 START TEST nvmf_host_management 00:12:11.267 ************************************ 00:12:11.267 16:04:41 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:12:11.267 16:04:41 -- target/host_management.sh@69 -- # starttarget 00:12:11.267 16:04:41 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:11.267 16:04:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:11.267 16:04:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:11.267 16:04:41 -- common/autotest_common.sh@10 -- # set +x 00:12:11.267 16:04:41 -- nvmf/common.sh@470 -- # nvmfpid=77505 00:12:11.267 16:04:41 -- nvmf/common.sh@471 -- # waitforlisten 77505 00:12:11.267 16:04:41 -- common/autotest_common.sh@817 -- # '[' -z 77505 ']' 00:12:11.267 16:04:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.267 16:04:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:11.267 16:04:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.267 16:04:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:11.267 16:04:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:11.267 16:04:41 -- common/autotest_common.sh@10 -- # set +x 00:12:11.267 [2024-04-15 16:04:41.216406] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:12:11.267 [2024-04-15 16:04:41.216486] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.526 [2024-04-15 16:04:41.365956] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.526 [2024-04-15 16:04:41.424216] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.526 [2024-04-15 16:04:41.424507] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
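Note: the nvmf_veth_init records above build the test network this run depends on: a target network namespace (nvmf_tgt_ns_spdk), veth pairs for the initiator and target sides, a bridge (nvmf_br) joining them, addresses 10.0.0.1/24 on the initiator and 10.0.0.2/24 plus 10.0.0.3/24 inside the namespace, and an iptables rule admitting TCP port 4420; the three pings confirm reachability before the target starts. A condensed sketch of the equivalent standalone commands, assuming root with iproute2 and iptables available (the script additionally creates the second target veth pair, the FORWARD rule and cleanup, omitted here):

# Sketch of the topology nvmf_veth_init sets up (condensed; second target pair and cleanup omitted)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target namespace, as in the log above

The target application is then launched through NVMF_TARGET_NS_CMD, i.e. prefixed with "ip netns exec nvmf_tgt_ns_spdk", so nvmf_tgt only sees the namespaced interfaces while bdevperf connects from the host side across the bridge.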
00:12:11.526 [2024-04-15 16:04:41.424708] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.526 [2024-04-15 16:04:41.424932] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.526 [2024-04-15 16:04:41.424975] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.526 [2024-04-15 16:04:41.425102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.526 [2024-04-15 16:04:41.425630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.526 [2024-04-15 16:04:41.425745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:11.526 [2024-04-15 16:04:41.425749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.827 16:04:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:11.827 16:04:41 -- common/autotest_common.sh@850 -- # return 0 00:12:11.827 16:04:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:11.827 16:04:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:11.827 16:04:41 -- common/autotest_common.sh@10 -- # set +x 00:12:11.827 16:04:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.827 16:04:41 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:11.827 16:04:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:11.827 16:04:41 -- common/autotest_common.sh@10 -- # set +x 00:12:11.827 [2024-04-15 16:04:41.573831] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.827 16:04:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:11.827 16:04:41 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:11.827 16:04:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:11.827 16:04:41 -- common/autotest_common.sh@10 -- # set +x 00:12:11.827 16:04:41 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:11.827 16:04:41 -- target/host_management.sh@23 -- # cat 00:12:11.827 16:04:41 -- target/host_management.sh@30 -- # rpc_cmd 00:12:11.827 16:04:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:11.827 16:04:41 -- common/autotest_common.sh@10 -- # set +x 00:12:11.827 Malloc0 00:12:11.827 [2024-04-15 16:04:41.649049] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.827 16:04:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:11.827 16:04:41 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:11.827 16:04:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:11.827 16:04:41 -- common/autotest_common.sh@10 -- # set +x 00:12:11.827 16:04:41 -- target/host_management.sh@73 -- # perfpid=77557 00:12:11.827 16:04:41 -- target/host_management.sh@74 -- # waitforlisten 77557 /var/tmp/bdevperf.sock 00:12:11.827 16:04:41 -- common/autotest_common.sh@817 -- # '[' -z 77557 ']' 00:12:11.827 16:04:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:11.827 16:04:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:11.827 16:04:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:11.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
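Note: between nvmfappstart and the bdevperf launch, the target at /var/tmp/spdk.sock is configured via rpc_cmd: a TCP transport ("-t tcp -o -u 8192"), a 64 MiB / 512-byte-block Malloc bdev (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE), and a subsystem listening on 10.0.0.2:4420 (the "NVMe/TCP Target Listening" notice above). The expanded rpcs.txt is not echoed in the log, so the following is only an approximate rpc.py sketch of that setup; the subsystem and host NQNs are taken from the JSON handed to bdevperf below, and the exact calls the script emits may differ:

# Approximate rpc.py equivalent of the target setup (sketch; host_management.sh drives this via rpcs.txt)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The same nvmf_subsystem_add_host / nvmf_subsystem_remove_host pair is what the test toggles later to force the I/O aborts seen further down.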
00:12:11.827 16:04:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:11.827 16:04:41 -- common/autotest_common.sh@10 -- # set +x 00:12:11.827 16:04:41 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:11.827 16:04:41 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:11.827 16:04:41 -- nvmf/common.sh@521 -- # config=() 00:12:11.827 16:04:41 -- nvmf/common.sh@521 -- # local subsystem config 00:12:11.827 16:04:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:11.827 16:04:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:11.827 { 00:12:11.827 "params": { 00:12:11.827 "name": "Nvme$subsystem", 00:12:11.827 "trtype": "$TEST_TRANSPORT", 00:12:11.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:11.827 "adrfam": "ipv4", 00:12:11.827 "trsvcid": "$NVMF_PORT", 00:12:11.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:11.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:11.827 "hdgst": ${hdgst:-false}, 00:12:11.827 "ddgst": ${ddgst:-false} 00:12:11.827 }, 00:12:11.827 "method": "bdev_nvme_attach_controller" 00:12:11.827 } 00:12:11.827 EOF 00:12:11.827 )") 00:12:11.827 16:04:41 -- nvmf/common.sh@543 -- # cat 00:12:11.827 16:04:41 -- nvmf/common.sh@545 -- # jq . 00:12:11.827 16:04:41 -- nvmf/common.sh@546 -- # IFS=, 00:12:11.827 16:04:41 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:11.827 "params": { 00:12:11.827 "name": "Nvme0", 00:12:11.827 "trtype": "tcp", 00:12:11.827 "traddr": "10.0.0.2", 00:12:11.827 "adrfam": "ipv4", 00:12:11.827 "trsvcid": "4420", 00:12:11.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:11.827 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:11.827 "hdgst": false, 00:12:11.827 "ddgst": false 00:12:11.827 }, 00:12:11.827 "method": "bdev_nvme_attach_controller" 00:12:11.827 }' 00:12:11.828 [2024-04-15 16:04:41.741503] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:12:11.828 [2024-04-15 16:04:41.741865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77557 ] 00:12:12.085 [2024-04-15 16:04:41.884503] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.085 [2024-04-15 16:04:41.936405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.343 Running I/O for 10 seconds... 
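Note: gen_nvmf_target_json assembles the bdevperf configuration in memory and hands it over process substitution (--json /dev/fd/63); the printf output above shows the single bdev_nvme_attach_controller entry it contains. A rough hand-run equivalent is sketched below: the "subsystems"/"config" envelope is the usual SPDK JSON-config shape and is assumed here rather than copied from the log, the params block is copied from the printf above, and /tmp/nvme0.json is a hypothetical scratch path (the test itself never writes the JSON to disk):

# Rough hand-run equivalent of the bdevperf step (JSON envelope assumed; params copied from the printf above)
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10

The test then polls bdev_get_iostat over /var/tmp/bdevperf.sock (the waitforio loop below) until at least 100 reads have completed before removing the host from the subsystem.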
00:12:12.343 16:04:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:12.343 16:04:42 -- common/autotest_common.sh@850 -- # return 0 00:12:12.343 16:04:42 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:12.343 16:04:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.343 16:04:42 -- common/autotest_common.sh@10 -- # set +x 00:12:12.343 16:04:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.343 16:04:42 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:12.343 16:04:42 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:12.343 16:04:42 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:12.343 16:04:42 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:12.343 16:04:42 -- target/host_management.sh@52 -- # local ret=1 00:12:12.343 16:04:42 -- target/host_management.sh@53 -- # local i 00:12:12.343 16:04:42 -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:12.343 16:04:42 -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:12.343 16:04:42 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:12.343 16:04:42 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:12.343 16:04:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.343 16:04:42 -- common/autotest_common.sh@10 -- # set +x 00:12:12.343 16:04:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.343 16:04:42 -- target/host_management.sh@55 -- # read_io_count=67 00:12:12.343 16:04:42 -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:12:12.343 16:04:42 -- target/host_management.sh@62 -- # sleep 0.25 00:12:12.602 16:04:42 -- target/host_management.sh@54 -- # (( i-- )) 00:12:12.602 16:04:42 -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:12.602 16:04:42 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:12.602 16:04:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.602 16:04:42 -- common/autotest_common.sh@10 -- # set +x 00:12:12.602 16:04:42 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:12.602 16:04:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.602 16:04:42 -- target/host_management.sh@55 -- # read_io_count=579 00:12:12.602 16:04:42 -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:12:12.602 16:04:42 -- target/host_management.sh@59 -- # ret=0 00:12:12.602 16:04:42 -- target/host_management.sh@60 -- # break 00:12:12.602 16:04:42 -- target/host_management.sh@64 -- # return 0 00:12:12.602 16:04:42 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:12.602 16:04:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.602 16:04:42 -- common/autotest_common.sh@10 -- # set +x 00:12:12.602 16:04:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.602 16:04:42 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:12.602 16:04:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.602 16:04:42 -- common/autotest_common.sh@10 -- # set +x 00:12:12.602 [2024-04-15 16:04:42.521878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.522062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.522203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.522303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.522367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.522475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.522558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.522628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.522720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.522779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.522850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.522916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.522971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.523082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.523156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.523221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.523278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.523389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.523470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.523534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.523599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.523708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.523789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.523854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.523909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.524017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.524100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.524164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.524223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.524328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.524437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.524496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.524551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.524690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.524750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.524855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.524914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.524989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.525044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.602 [2024-04-15 16:04:42.525096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.602 [2024-04-15 16:04:42.525177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:12:12.603 [2024-04-15 16:04:42.525236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.525374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.525463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.525563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.525686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.525795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.525857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.525960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 16:04:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.603 [2024-04-15 16:04:42.526277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 16:04:42 -- target/host_management.sh@87 -- # sleep 1 00:12:12.603 [2024-04-15 16:04:42.526373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.526974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.526994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.527006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.527019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.527034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.527048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.527066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.527089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.527117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.527140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.527167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.527191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.603 [2024-04-15 16:04:42.527216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.603 [2024-04-15 16:04:42.527239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.604 [2024-04-15 16:04:42.527267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.604 [2024-04-15 16:04:42.527286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.604 [2024-04-15 16:04:42.527299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.604 [2024-04-15 16:04:42.527313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.604 [2024-04-15 16:04:42.527364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.604 [2024-04-15 16:04:42.527377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.604 [2024-04-15 16:04:42.527399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.604 [2024-04-15 16:04:42.527416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.604 [2024-04-15 16:04:42.527434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:12.604 [2024-04-15 16:04:42.527448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.604 [2024-04-15 16:04:42.527460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc277a0 is same with the state(5) to be set 00:12:12.604 [2024-04-15 16:04:42.527542] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc277a0 was disconnected and freed. reset controller. 00:12:12.604 [2024-04-15 16:04:42.527676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.604 [2024-04-15 16:04:42.527704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.604 [2024-04-15 16:04:42.527720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.604 [2024-04-15 16:04:42.527733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.604 [2024-04-15 16:04:42.527748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.604 [2024-04-15 16:04:42.527761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.604 [2024-04-15 16:04:42.527772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:12.604 [2024-04-15 16:04:42.527782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:12.604 [2024-04-15 16:04:42.527792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc249a0 is same with the state(5) to be set 00:12:12.604 [2024-04-15 16:04:42.528909] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:12.604 task offset: 81920 on job bdev=Nvme0n1 fails 00:12:12.604 00:12:12.604 Latency(us) 00:12:12.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:12:12.604 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:12.604 Job: Nvme0n1 ended in about 0.42 seconds with error 00:12:12.604 Verification LBA range: start 0x0 length 0x400 00:12:12.604 Nvme0n1 : 0.42 1505.90 94.12 150.59 0.00 37128.64 6335.15 41943.04 00:12:12.604 =================================================================================================================== 00:12:12.604 Total : 1505.90 94.12 150.59 0.00 37128.64 6335.15 41943.04 00:12:12.604 [2024-04-15 16:04:42.531492] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:12.604 [2024-04-15 16:04:42.531608] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc249a0 (9): Bad file descriptor 00:12:12.604 [2024-04-15 16:04:42.540828] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:13.977 16:04:43 -- target/host_management.sh@91 -- # kill -9 77557 00:12:13.977 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (77557) - No such process 00:12:13.977 16:04:43 -- target/host_management.sh@91 -- # true 00:12:13.977 16:04:43 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:13.977 16:04:43 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:13.977 16:04:43 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:13.977 16:04:43 -- nvmf/common.sh@521 -- # config=() 00:12:13.977 16:04:43 -- nvmf/common.sh@521 -- # local subsystem config 00:12:13.977 16:04:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:13.977 16:04:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:13.977 { 00:12:13.977 "params": { 00:12:13.977 "name": "Nvme$subsystem", 00:12:13.977 "trtype": "$TEST_TRANSPORT", 00:12:13.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:13.977 "adrfam": "ipv4", 00:12:13.977 "trsvcid": "$NVMF_PORT", 00:12:13.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:13.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:13.977 "hdgst": ${hdgst:-false}, 00:12:13.977 "ddgst": ${ddgst:-false} 00:12:13.977 }, 00:12:13.977 "method": "bdev_nvme_attach_controller" 00:12:13.977 } 00:12:13.977 EOF 00:12:13.977 )") 00:12:13.977 16:04:43 -- nvmf/common.sh@543 -- # cat 00:12:13.977 16:04:43 -- nvmf/common.sh@545 -- # jq . 00:12:13.977 16:04:43 -- nvmf/common.sh@546 -- # IFS=, 00:12:13.977 16:04:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:13.977 "params": { 00:12:13.977 "name": "Nvme0", 00:12:13.977 "trtype": "tcp", 00:12:13.977 "traddr": "10.0.0.2", 00:12:13.977 "adrfam": "ipv4", 00:12:13.977 "trsvcid": "4420", 00:12:13.977 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:13.977 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:13.977 "hdgst": false, 00:12:13.977 "ddgst": false 00:12:13.977 }, 00:12:13.977 "method": "bdev_nvme_attach_controller" 00:12:13.977 }' 00:12:13.977 [2024-04-15 16:04:43.586091] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:12:13.977 [2024-04-15 16:04:43.586367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77596 ] 00:12:13.977 [2024-04-15 16:04:43.728727] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.977 [2024-04-15 16:04:43.790302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.234 Running I/O for 1 seconds... 00:12:15.167 00:12:15.167 Latency(us) 00:12:15.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.167 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:15.167 Verification LBA range: start 0x0 length 0x400 00:12:15.167 Nvme0n1 : 1.04 1727.78 107.99 0.00 0.00 36386.66 4025.78 33704.23 00:12:15.167 =================================================================================================================== 00:12:15.167 Total : 1727.78 107.99 0.00 0.00 36386.66 4025.78 33704.23 00:12:15.424 16:04:45 -- target/host_management.sh@101 -- # stoptarget 00:12:15.424 16:04:45 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:15.424 16:04:45 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:12:15.424 16:04:45 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:15.424 16:04:45 -- target/host_management.sh@40 -- # nvmftestfini 00:12:15.424 16:04:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:15.424 16:04:45 -- nvmf/common.sh@117 -- # sync 00:12:15.424 16:04:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:15.424 16:04:45 -- nvmf/common.sh@120 -- # set +e 00:12:15.424 16:04:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:15.424 16:04:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:15.424 rmmod nvme_tcp 00:12:15.424 rmmod nvme_fabrics 00:12:15.424 16:04:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:15.424 16:04:45 -- nvmf/common.sh@124 -- # set -e 00:12:15.424 16:04:45 -- nvmf/common.sh@125 -- # return 0 00:12:15.424 16:04:45 -- nvmf/common.sh@478 -- # '[' -n 77505 ']' 00:12:15.424 16:04:45 -- nvmf/common.sh@479 -- # killprocess 77505 00:12:15.424 16:04:45 -- common/autotest_common.sh@936 -- # '[' -z 77505 ']' 00:12:15.424 16:04:45 -- common/autotest_common.sh@940 -- # kill -0 77505 00:12:15.424 16:04:45 -- common/autotest_common.sh@941 -- # uname 00:12:15.424 16:04:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:15.424 16:04:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77505 00:12:15.424 16:04:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:15.424 16:04:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:15.425 16:04:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77505' 00:12:15.425 killing process with pid 77505 00:12:15.425 16:04:45 -- common/autotest_common.sh@955 -- # kill 77505 00:12:15.425 16:04:45 -- common/autotest_common.sh@960 -- # wait 77505 00:12:15.683 [2024-04-15 16:04:45.509624] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:15.683 16:04:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:15.683 16:04:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:15.683 16:04:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:15.683 16:04:45 -- nvmf/common.sh@274 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:15.683 16:04:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:15.683 16:04:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.683 16:04:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.683 16:04:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.683 16:04:45 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:15.683 00:12:15.683 real 0m4.414s 00:12:15.683 user 0m18.181s 00:12:15.683 sys 0m1.250s 00:12:15.683 16:04:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:15.683 16:04:45 -- common/autotest_common.sh@10 -- # set +x 00:12:15.683 ************************************ 00:12:15.683 END TEST nvmf_host_management 00:12:15.683 ************************************ 00:12:15.683 16:04:45 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:12:15.683 00:12:15.683 real 0m5.175s 00:12:15.683 user 0m18.359s 00:12:15.683 sys 0m1.576s 00:12:15.683 16:04:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:15.683 16:04:45 -- common/autotest_common.sh@10 -- # set +x 00:12:15.683 ************************************ 00:12:15.683 END TEST nvmf_host_management 00:12:15.683 ************************************ 00:12:15.941 16:04:45 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:15.941 16:04:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:15.941 16:04:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:15.941 16:04:45 -- common/autotest_common.sh@10 -- # set +x 00:12:15.941 ************************************ 00:12:15.941 START TEST nvmf_lvol 00:12:15.941 ************************************ 00:12:15.941 16:04:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:15.941 * Looking for test storage... 
00:12:15.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:15.941 16:04:45 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:15.941 16:04:45 -- nvmf/common.sh@7 -- # uname -s 00:12:15.941 16:04:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.941 16:04:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.941 16:04:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.941 16:04:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.941 16:04:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.941 16:04:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.941 16:04:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.941 16:04:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.941 16:04:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.941 16:04:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.941 16:04:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:12:15.941 16:04:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:12:15.941 16:04:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.941 16:04:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.941 16:04:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:15.941 16:04:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.941 16:04:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.941 16:04:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.941 16:04:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.941 16:04:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.941 16:04:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.941 16:04:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.941 16:04:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.941 16:04:45 -- paths/export.sh@5 -- # export PATH 00:12:15.942 16:04:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.942 16:04:45 -- nvmf/common.sh@47 -- # : 0 00:12:15.942 16:04:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:15.942 16:04:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:15.942 16:04:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.942 16:04:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.942 16:04:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.942 16:04:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:15.942 16:04:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:15.942 16:04:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:15.942 16:04:45 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:15.942 16:04:45 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:15.942 16:04:45 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:15.942 16:04:45 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:15.942 16:04:45 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:15.942 16:04:45 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:15.942 16:04:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:15.942 16:04:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.942 16:04:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:15.942 16:04:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:15.942 16:04:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:15.942 16:04:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.942 16:04:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.942 16:04:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.942 16:04:45 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:15.942 16:04:45 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:15.942 16:04:45 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:15.942 16:04:45 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:15.942 16:04:45 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:15.942 16:04:45 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:15.942 16:04:45 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.942 16:04:45 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.942 16:04:45 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:15.942 16:04:45 -- 
nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:15.942 16:04:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:15.942 16:04:45 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:15.942 16:04:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:15.942 16:04:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.942 16:04:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:15.942 16:04:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:15.942 16:04:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:15.942 16:04:45 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:15.942 16:04:45 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:15.942 16:04:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:16.201 Cannot find device "nvmf_tgt_br" 00:12:16.201 16:04:45 -- nvmf/common.sh@155 -- # true 00:12:16.201 16:04:45 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:16.201 Cannot find device "nvmf_tgt_br2" 00:12:16.201 16:04:45 -- nvmf/common.sh@156 -- # true 00:12:16.201 16:04:45 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:16.201 16:04:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:16.201 Cannot find device "nvmf_tgt_br" 00:12:16.201 16:04:45 -- nvmf/common.sh@158 -- # true 00:12:16.201 16:04:45 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:16.201 Cannot find device "nvmf_tgt_br2" 00:12:16.201 16:04:45 -- nvmf/common.sh@159 -- # true 00:12:16.201 16:04:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:16.201 16:04:46 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:16.201 16:04:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:16.201 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.201 16:04:46 -- nvmf/common.sh@162 -- # true 00:12:16.201 16:04:46 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:16.201 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.201 16:04:46 -- nvmf/common.sh@163 -- # true 00:12:16.201 16:04:46 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:16.201 16:04:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:16.201 16:04:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:16.201 16:04:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:16.201 16:04:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:16.201 16:04:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:16.201 16:04:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:16.201 16:04:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:16.201 16:04:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:16.201 16:04:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:16.201 16:04:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:16.201 16:04:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:16.201 16:04:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:16.201 16:04:46 -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:16.201 16:04:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:16.201 16:04:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:16.201 16:04:46 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:16.201 16:04:46 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:16.201 16:04:46 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:16.459 16:04:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:16.459 16:04:46 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:16.459 16:04:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:16.459 16:04:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:16.459 16:04:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:16.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:12:16.459 00:12:16.459 --- 10.0.0.2 ping statistics --- 00:12:16.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.459 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:12:16.459 16:04:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:16.459 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:16.459 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:12:16.459 00:12:16.459 --- 10.0.0.3 ping statistics --- 00:12:16.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.459 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:16.459 16:04:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:16.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:16.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:16.459 00:12:16.459 --- 10.0.0.1 ping statistics --- 00:12:16.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.459 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:16.459 16:04:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.459 16:04:46 -- nvmf/common.sh@422 -- # return 0 00:12:16.459 16:04:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:16.459 16:04:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.459 16:04:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:16.459 16:04:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:16.459 16:04:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.459 16:04:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:16.459 16:04:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:16.459 16:04:46 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:16.459 16:04:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:16.459 16:04:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:16.459 16:04:46 -- common/autotest_common.sh@10 -- # set +x 00:12:16.459 16:04:46 -- nvmf/common.sh@470 -- # nvmfpid=77822 00:12:16.459 16:04:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:16.460 16:04:46 -- nvmf/common.sh@471 -- # waitforlisten 77822 00:12:16.460 16:04:46 -- common/autotest_common.sh@817 -- # '[' -z 77822 ']' 00:12:16.460 16:04:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.460 16:04:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:16.460 16:04:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.460 16:04:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:16.460 16:04:46 -- common/autotest_common.sh@10 -- # set +x 00:12:16.460 [2024-04-15 16:04:46.293236] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:12:16.460 [2024-04-15 16:04:46.293508] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.718 [2024-04-15 16:04:46.436727] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:16.718 [2024-04-15 16:04:46.491089] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.718 [2024-04-15 16:04:46.491341] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.718 [2024-04-15 16:04:46.491534] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.718 [2024-04-15 16:04:46.491694] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.718 [2024-04-15 16:04:46.491745] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
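For reference, the nvmf_veth_init sequence traced above (nvmf/common.sh@141-207) condenses to roughly the following commands. This is a sketch assembled from the traced lines, not a standalone setup script; the namespace, interface and address names are exactly the ones shown in the log, and the teardown of stale interfaces from a previous run is omitted.

  # Target network namespace plus three veth pairs; nvmf_tgt_if and nvmf_tgt_if2 move into the
  # namespace, while nvmf_init_if and the *_br ends stay in the root namespace and get bridged.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Initiator gets 10.0.0.1; the two target-side interfaces get 10.0.0.2 and 10.0.0.3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring the links up and join the root-namespace ends into one bridge.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Open TCP port 4420 toward the initiator interface, allow bridge forwarding, then sanity-check with ping.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1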
00:12:16.718 [2024-04-15 16:04:46.491961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.718 [2024-04-15 16:04:46.492033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.718 [2024-04-15 16:04:46.492038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.718 16:04:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:16.718 16:04:46 -- common/autotest_common.sh@850 -- # return 0 00:12:16.718 16:04:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:16.718 16:04:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:16.718 16:04:46 -- common/autotest_common.sh@10 -- # set +x 00:12:16.718 16:04:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.718 16:04:46 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:16.976 [2024-04-15 16:04:46.908754] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.976 16:04:46 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:17.542 16:04:47 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:17.543 16:04:47 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:17.802 16:04:47 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:17.802 16:04:47 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:18.059 16:04:47 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:18.318 16:04:48 -- target/nvmf_lvol.sh@29 -- # lvs=a38ae3d9-e8ef-480a-92a5-8920260d170d 00:12:18.318 16:04:48 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a38ae3d9-e8ef-480a-92a5-8920260d170d lvol 20 00:12:18.575 16:04:48 -- target/nvmf_lvol.sh@32 -- # lvol=bf4d6a5a-1353-472d-818d-56f5b90fa841 00:12:18.575 16:04:48 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:19.140 16:04:48 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bf4d6a5a-1353-472d-818d-56f5b90fa841 00:12:19.140 16:04:49 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:19.706 [2024-04-15 16:04:49.389158] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.706 16:04:49 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:19.964 16:04:49 -- target/nvmf_lvol.sh@42 -- # perf_pid=77896 00:12:19.964 16:04:49 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:19.964 16:04:49 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:20.897 16:04:50 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot bf4d6a5a-1353-472d-818d-56f5b90fa841 MY_SNAPSHOT 00:12:21.154 16:04:51 -- target/nvmf_lvol.sh@47 -- # snapshot=4969c5a9-7d3c-497e-8796-4daee19cca9d 00:12:21.154 16:04:51 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize bf4d6a5a-1353-472d-818d-56f5b90fa841 30 00:12:21.411 16:04:51 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 4969c5a9-7d3c-497e-8796-4daee19cca9d MY_CLONE 00:12:21.668 16:04:51 -- target/nvmf_lvol.sh@49 -- # clone=db65bb61-ff52-4f54-8b20-614a08dc48fc 00:12:21.668 16:04:51 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate db65bb61-ff52-4f54-8b20-614a08dc48fc 00:12:21.966 16:04:51 -- target/nvmf_lvol.sh@53 -- # wait 77896 00:12:30.131 Initializing NVMe Controllers 00:12:30.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:30.132 Controller IO queue size 128, less than required. 00:12:30.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:30.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:30.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:30.132 Initialization complete. Launching workers. 00:12:30.132 ======================================================== 00:12:30.132 Latency(us) 00:12:30.132 Device Information : IOPS MiB/s Average min max 00:12:30.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9859.30 38.51 12994.29 2120.75 64640.99 00:12:30.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9897.80 38.66 12946.40 997.58 51794.04 00:12:30.132 ======================================================== 00:12:30.132 Total : 19757.10 77.18 12970.30 997.58 64640.99 00:12:30.132 00:12:30.132 16:05:00 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:30.390 16:05:00 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bf4d6a5a-1353-472d-818d-56f5b90fa841 00:12:30.648 16:05:00 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a38ae3d9-e8ef-480a-92a5-8920260d170d 00:12:31.215 16:05:00 -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:31.215 16:05:00 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:31.215 16:05:00 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:31.215 16:05:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:31.215 16:05:00 -- nvmf/common.sh@117 -- # sync 00:12:31.215 16:05:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:31.215 16:05:00 -- nvmf/common.sh@120 -- # set +e 00:12:31.215 16:05:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.215 16:05:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:31.215 rmmod nvme_tcp 00:12:31.215 rmmod nvme_fabrics 00:12:31.215 16:05:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.215 16:05:00 -- nvmf/common.sh@124 -- # set -e 00:12:31.215 16:05:00 -- nvmf/common.sh@125 -- # return 0 00:12:31.215 16:05:00 -- nvmf/common.sh@478 -- # '[' -n 77822 ']' 00:12:31.215 16:05:00 -- nvmf/common.sh@479 -- # killprocess 77822 00:12:31.215 16:05:00 -- common/autotest_common.sh@936 -- # '[' -z 77822 ']' 00:12:31.215 16:05:00 -- common/autotest_common.sh@940 -- # kill -0 77822 00:12:31.215 16:05:00 -- common/autotest_common.sh@941 -- # uname 00:12:31.215 16:05:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:31.215 16:05:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77822 00:12:31.215 16:05:01 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:31.215 killing process with pid 77822 00:12:31.215 16:05:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:31.215 16:05:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77822' 00:12:31.215 16:05:01 -- common/autotest_common.sh@955 -- # kill 77822 00:12:31.215 16:05:01 -- common/autotest_common.sh@960 -- # wait 77822 00:12:31.472 16:05:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:31.472 16:05:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:31.472 16:05:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:31.472 16:05:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.472 16:05:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.472 16:05:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.472 16:05:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.472 16:05:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.473 16:05:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:31.473 ************************************ 00:12:31.473 END TEST nvmf_lvol 00:12:31.473 ************************************ 00:12:31.473 00:12:31.473 real 0m15.549s 00:12:31.473 user 1m3.342s 00:12:31.473 sys 0m5.940s 00:12:31.473 16:05:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:31.473 16:05:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.473 16:05:01 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:31.473 16:05:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.473 16:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.473 16:05:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.473 ************************************ 00:12:31.473 START TEST nvmf_lvs_grow 00:12:31.473 ************************************ 00:12:31.473 16:05:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:31.731 * Looking for test storage... 
00:12:31.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.731 16:05:01 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.731 16:05:01 -- nvmf/common.sh@7 -- # uname -s 00:12:31.731 16:05:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.731 16:05:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.731 16:05:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.731 16:05:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.731 16:05:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.731 16:05:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.731 16:05:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.731 16:05:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.731 16:05:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.731 16:05:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.731 16:05:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:12:31.731 16:05:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:12:31.731 16:05:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.731 16:05:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.731 16:05:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.731 16:05:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.731 16:05:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.731 16:05:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.731 16:05:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.731 16:05:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.731 16:05:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.731 16:05:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.731 16:05:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.731 16:05:01 -- paths/export.sh@5 -- # export PATH 00:12:31.731 16:05:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.731 16:05:01 -- nvmf/common.sh@47 -- # : 0 00:12:31.731 16:05:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.731 16:05:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.731 16:05:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.731 16:05:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.731 16:05:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.731 16:05:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.731 16:05:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.731 16:05:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.731 16:05:01 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:31.731 16:05:01 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:31.731 16:05:01 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:12:31.731 16:05:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:31.731 16:05:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.731 16:05:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:31.731 16:05:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:31.731 16:05:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:31.731 16:05:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.731 16:05:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.731 16:05:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.731 16:05:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:31.731 16:05:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:31.731 16:05:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:31.731 16:05:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:31.731 16:05:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:31.731 16:05:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:31.731 16:05:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.731 16:05:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.731 16:05:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:31.731 16:05:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:31.731 16:05:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.731 16:05:01 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.731 16:05:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.731 16:05:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.731 16:05:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.731 16:05:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.731 16:05:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.731 16:05:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.731 16:05:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:31.731 16:05:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:31.731 Cannot find device "nvmf_tgt_br" 00:12:31.731 16:05:01 -- nvmf/common.sh@155 -- # true 00:12:31.731 16:05:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.731 Cannot find device "nvmf_tgt_br2" 00:12:31.731 16:05:01 -- nvmf/common.sh@156 -- # true 00:12:31.731 16:05:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:31.731 16:05:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:31.731 Cannot find device "nvmf_tgt_br" 00:12:31.731 16:05:01 -- nvmf/common.sh@158 -- # true 00:12:31.731 16:05:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:31.731 Cannot find device "nvmf_tgt_br2" 00:12:31.731 16:05:01 -- nvmf/common.sh@159 -- # true 00:12:31.731 16:05:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:31.989 16:05:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:31.989 16:05:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.989 16:05:01 -- nvmf/common.sh@162 -- # true 00:12:31.989 16:05:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.989 16:05:01 -- nvmf/common.sh@163 -- # true 00:12:31.989 16:05:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:31.989 16:05:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:31.989 16:05:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:31.989 16:05:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:31.989 16:05:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:31.989 16:05:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:31.989 16:05:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:31.989 16:05:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:31.989 16:05:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:31.989 16:05:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:31.989 16:05:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:31.989 16:05:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:31.989 16:05:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:31.989 16:05:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:31.989 16:05:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
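Once the namespace networking is in place, each test starts the SPDK target inside the namespace and builds its NVMe/TCP subsystem over JSON-RPC. Condensed from the RPC calls visible elsewhere in this trace (the core mask and the attached bdev differ per test; <namespace-bdev> below is a placeholder for whichever bdev or lvol UUID the test adds), the sequence looks roughly like:

  # Launch nvmf_tgt inside the namespace; the test scripts then wait for its RPC socket
  # (/var/tmp/spdk.sock) before issuing any RPCs.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <namespace-bdev>
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The initiator side (bdevperf or spdk_nvme_perf in this trace) then connects from the root namespace to 10.0.0.2:4420 with trtype tcp and subnqn nqn.2016-06.io.spdk:cnode0.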
00:12:31.989 16:05:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:31.989 16:05:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:31.989 16:05:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:31.989 16:05:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:31.989 16:05:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:31.989 16:05:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:32.247 16:05:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:32.247 16:05:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:32.247 16:05:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:32.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:12:32.247 00:12:32.247 --- 10.0.0.2 ping statistics --- 00:12:32.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.247 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:32.247 16:05:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:32.247 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:32.247 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:12:32.247 00:12:32.247 --- 10.0.0.3 ping statistics --- 00:12:32.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.247 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:32.247 16:05:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:32.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:12:32.247 00:12:32.247 --- 10.0.0.1 ping statistics --- 00:12:32.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.247 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:32.247 16:05:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.247 16:05:01 -- nvmf/common.sh@422 -- # return 0 00:12:32.247 16:05:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:32.247 16:05:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.247 16:05:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:32.247 16:05:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:32.247 16:05:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.247 16:05:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:32.247 16:05:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:32.247 16:05:02 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:12:32.247 16:05:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:32.247 16:05:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:32.247 16:05:02 -- common/autotest_common.sh@10 -- # set +x 00:12:32.247 16:05:02 -- nvmf/common.sh@470 -- # nvmfpid=78228 00:12:32.247 16:05:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:32.247 16:05:02 -- nvmf/common.sh@471 -- # waitforlisten 78228 00:12:32.247 16:05:02 -- common/autotest_common.sh@817 -- # '[' -z 78228 ']' 00:12:32.248 16:05:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.248 16:05:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:32.248 16:05:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:12:32.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.248 16:05:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:32.248 16:05:02 -- common/autotest_common.sh@10 -- # set +x 00:12:32.248 [2024-04-15 16:05:02.101723] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:12:32.248 [2024-04-15 16:05:02.102126] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.509 [2024-04-15 16:05:02.262057] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.509 [2024-04-15 16:05:02.346086] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.509 [2024-04-15 16:05:02.346409] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.509 [2024-04-15 16:05:02.346646] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.509 [2024-04-15 16:05:02.346957] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.509 [2024-04-15 16:05:02.347134] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.509 [2024-04-15 16:05:02.347313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.474 16:05:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:33.474 16:05:03 -- common/autotest_common.sh@850 -- # return 0 00:12:33.474 16:05:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:33.474 16:05:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:33.474 16:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:33.474 16:05:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.474 16:05:03 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:33.474 [2024-04-15 16:05:03.375405] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.474 16:05:03 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:12:33.474 16:05:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:33.474 16:05:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.474 16:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:33.732 ************************************ 00:12:33.732 START TEST lvs_grow_clean 00:12:33.732 ************************************ 00:12:33.732 16:05:03 -- common/autotest_common.sh@1111 -- # lvs_grow 00:12:33.732 16:05:03 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:33.732 16:05:03 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:33.732 16:05:03 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:33.732 16:05:03 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:33.732 16:05:03 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:33.732 16:05:03 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:33.732 16:05:03 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:33.732 16:05:03 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:33.732 16:05:03 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:33.990 16:05:03 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:33.990 16:05:03 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:34.249 16:05:04 -- target/nvmf_lvs_grow.sh@28 -- # lvs=77d9f5ac-f679-4635-aea6-cd672717a00a 00:12:34.249 16:05:04 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77d9f5ac-f679-4635-aea6-cd672717a00a 00:12:34.249 16:05:04 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:34.507 16:05:04 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:34.507 16:05:04 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:34.507 16:05:04 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 77d9f5ac-f679-4635-aea6-cd672717a00a lvol 150 00:12:34.765 16:05:04 -- target/nvmf_lvs_grow.sh@33 -- # lvol=81ec77bd-e371-40d7-bee8-7b95d78560c9 00:12:34.765 16:05:04 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:34.765 16:05:04 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:35.021 [2024-04-15 16:05:04.762357] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:35.022 [2024-04-15 16:05:04.762679] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:35.022 true 00:12:35.022 16:05:04 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:35.022 16:05:04 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77d9f5ac-f679-4635-aea6-cd672717a00a 00:12:35.278 16:05:05 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:35.279 16:05:05 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:35.537 16:05:05 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 81ec77bd-e371-40d7-bee8-7b95d78560c9 00:12:35.795 16:05:05 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:35.795 [2024-04-15 16:05:05.690874] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.795 16:05:05 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:36.053 16:05:05 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=78310 00:12:36.053 16:05:05 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:36.053 16:05:05 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:36.053 16:05:05 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 78310 
/var/tmp/bdevperf.sock 00:12:36.053 16:05:05 -- common/autotest_common.sh@817 -- # '[' -z 78310 ']' 00:12:36.053 16:05:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:36.053 16:05:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:36.053 16:05:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:36.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:36.053 16:05:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:36.053 16:05:05 -- common/autotest_common.sh@10 -- # set +x 00:12:36.338 [2024-04-15 16:05:06.019920] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:12:36.339 [2024-04-15 16:05:06.020306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78310 ] 00:12:36.339 [2024-04-15 16:05:06.166130] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.339 [2024-04-15 16:05:06.225068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.274 16:05:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:37.274 16:05:06 -- common/autotest_common.sh@850 -- # return 0 00:12:37.274 16:05:06 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:37.274 Nvme0n1 00:12:37.274 16:05:07 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:37.531 [ 00:12:37.531 { 00:12:37.531 "name": "Nvme0n1", 00:12:37.531 "aliases": [ 00:12:37.531 "81ec77bd-e371-40d7-bee8-7b95d78560c9" 00:12:37.531 ], 00:12:37.531 "product_name": "NVMe disk", 00:12:37.531 "block_size": 4096, 00:12:37.531 "num_blocks": 38912, 00:12:37.531 "uuid": "81ec77bd-e371-40d7-bee8-7b95d78560c9", 00:12:37.531 "assigned_rate_limits": { 00:12:37.531 "rw_ios_per_sec": 0, 00:12:37.531 "rw_mbytes_per_sec": 0, 00:12:37.531 "r_mbytes_per_sec": 0, 00:12:37.531 "w_mbytes_per_sec": 0 00:12:37.531 }, 00:12:37.531 "claimed": false, 00:12:37.531 "zoned": false, 00:12:37.531 "supported_io_types": { 00:12:37.531 "read": true, 00:12:37.531 "write": true, 00:12:37.531 "unmap": true, 00:12:37.531 "write_zeroes": true, 00:12:37.531 "flush": true, 00:12:37.531 "reset": true, 00:12:37.531 "compare": true, 00:12:37.531 "compare_and_write": true, 00:12:37.531 "abort": true, 00:12:37.531 "nvme_admin": true, 00:12:37.531 "nvme_io": true 00:12:37.531 }, 00:12:37.531 "memory_domains": [ 00:12:37.531 { 00:12:37.531 "dma_device_id": "system", 00:12:37.531 "dma_device_type": 1 00:12:37.531 } 00:12:37.531 ], 00:12:37.531 "driver_specific": { 00:12:37.531 "nvme": [ 00:12:37.531 { 00:12:37.531 "trid": { 00:12:37.531 "trtype": "TCP", 00:12:37.531 "adrfam": "IPv4", 00:12:37.531 "traddr": "10.0.0.2", 00:12:37.531 "trsvcid": "4420", 00:12:37.531 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:37.531 }, 00:12:37.531 "ctrlr_data": { 00:12:37.531 "cntlid": 1, 00:12:37.531 "vendor_id": "0x8086", 00:12:37.531 "model_number": "SPDK bdev Controller", 00:12:37.531 "serial_number": "SPDK0", 00:12:37.531 "firmware_revision": "24.05", 00:12:37.531 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:12:37.531 "oacs": { 00:12:37.531 "security": 0, 00:12:37.531 "format": 0, 00:12:37.531 "firmware": 0, 00:12:37.531 "ns_manage": 0 00:12:37.531 }, 00:12:37.531 "multi_ctrlr": true, 00:12:37.531 "ana_reporting": false 00:12:37.531 }, 00:12:37.531 "vs": { 00:12:37.531 "nvme_version": "1.3" 00:12:37.531 }, 00:12:37.531 "ns_data": { 00:12:37.531 "id": 1, 00:12:37.531 "can_share": true 00:12:37.531 } 00:12:37.531 } 00:12:37.531 ], 00:12:37.531 "mp_policy": "active_passive" 00:12:37.531 } 00:12:37.531 } 00:12:37.531 ] 00:12:37.531 16:05:07 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=78338 00:12:37.531 16:05:07 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:37.531 16:05:07 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:37.531 Running I/O for 10 seconds... 00:12:38.907 Latency(us) 00:12:38.907 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.907 Nvme0n1 : 1.00 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:12:38.907 =================================================================================================================== 00:12:38.907 Total : 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:12:38.907 00:12:39.473 16:05:09 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 77d9f5ac-f679-4635-aea6-cd672717a00a 00:12:39.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.731 Nvme0n1 : 2.00 10350.50 40.43 0.00 0.00 0.00 0.00 0.00 00:12:39.731 =================================================================================================================== 00:12:39.731 Total : 10350.50 40.43 0.00 0.00 0.00 0.00 0.00 00:12:39.731 00:12:39.731 true 00:12:39.988 16:05:09 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77d9f5ac-f679-4635-aea6-cd672717a00a 00:12:39.988 16:05:09 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:40.246 16:05:10 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:40.246 16:05:10 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:40.246 16:05:10 -- target/nvmf_lvs_grow.sh@65 -- # wait 78338 00:12:40.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.815 Nvme0n1 : 3.00 10456.33 40.85 0.00 0.00 0.00 0.00 0.00 00:12:40.815 =================================================================================================================== 00:12:40.815 Total : 10456.33 40.85 0.00 0.00 0.00 0.00 0.00 00:12:40.815 00:12:41.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:41.745 Nvme0n1 : 4.00 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:12:41.745 =================================================================================================================== 00:12:41.745 Total : 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:12:41.745 00:12:42.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:42.681 Nvme0n1 : 5.00 10464.80 40.88 0.00 0.00 0.00 0.00 0.00 00:12:42.681 =================================================================================================================== 00:12:42.681 Total : 10464.80 40.88 0.00 0.00 0.00 0.00 0.00 00:12:42.681 00:12:43.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:43.613 Nvme0n1 : 
6.00 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:12:43.613 =================================================================================================================== 00:12:43.613 Total : 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:12:43.613 00:12:44.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:44.546 Nvme0n1 : 7.00 10363.14 40.48 0.00 0.00 0.00 0.00 0.00 00:12:44.546 =================================================================================================================== 00:12:44.546 Total : 10363.14 40.48 0.00 0.00 0.00 0.00 0.00 00:12:44.546 00:12:45.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:45.921 Nvme0n1 : 8.00 10433.00 40.75 0.00 0.00 0.00 0.00 0.00 00:12:45.921 =================================================================================================================== 00:12:45.921 Total : 10433.00 40.75 0.00 0.00 0.00 0.00 0.00 00:12:45.921 00:12:46.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:46.856 Nvme0n1 : 9.00 10473.22 40.91 0.00 0.00 0.00 0.00 0.00 00:12:46.856 =================================================================================================================== 00:12:46.856 Total : 10473.22 40.91 0.00 0.00 0.00 0.00 0.00 00:12:46.856 00:12:47.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.789 Nvme0n1 : 10.00 10441.20 40.79 0.00 0.00 0.00 0.00 0.00 00:12:47.789 =================================================================================================================== 00:12:47.789 Total : 10441.20 40.79 0.00 0.00 0.00 0.00 0.00 00:12:47.789 00:12:47.789 00:12:47.789 Latency(us) 00:12:47.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.789 Nvme0n1 : 10.00 10438.49 40.78 0.00 0.00 12256.75 4774.77 182751.82 00:12:47.789 =================================================================================================================== 00:12:47.789 Total : 10438.49 40.78 0.00 0.00 12256.75 4774.77 182751.82 00:12:47.789 0 00:12:47.789 16:05:17 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 78310 00:12:47.789 16:05:17 -- common/autotest_common.sh@936 -- # '[' -z 78310 ']' 00:12:47.789 16:05:17 -- common/autotest_common.sh@940 -- # kill -0 78310 00:12:47.789 16:05:17 -- common/autotest_common.sh@941 -- # uname 00:12:47.789 16:05:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:47.789 16:05:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78310 00:12:47.789 killing process with pid 78310 00:12:47.789 Received shutdown signal, test time was about 10.000000 seconds 00:12:47.789 00:12:47.789 Latency(us) 00:12:47.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.789 =================================================================================================================== 00:12:47.789 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:47.789 16:05:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:47.789 16:05:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:47.789 16:05:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78310' 00:12:47.789 16:05:17 -- common/autotest_common.sh@955 -- # kill 78310 00:12:47.789 16:05:17 -- common/autotest_common.sh@960 -- # wait 78310 00:12:47.789 16:05:17 -- 
target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:48.086 16:05:17 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77d9f5ac-f679-4635-aea6-cd672717a00a 00:12:48.086 16:05:17 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:12:48.349 16:05:18 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:12:48.349 16:05:18 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:12:48.349 16:05:18 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:48.607 [2024-04-15 16:05:18.395088] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:48.607 16:05:18 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77d9f5ac-f679-4635-aea6-cd672717a00a 00:12:48.607 16:05:18 -- common/autotest_common.sh@638 -- # local es=0 00:12:48.607 16:05:18 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77d9f5ac-f679-4635-aea6-cd672717a00a 00:12:48.607 16:05:18 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:48.607 16:05:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:48.607 16:05:18 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:48.607 16:05:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:48.607 16:05:18 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:48.608 16:05:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:48.608 16:05:18 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:48.608 16:05:18 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:48.608 16:05:18 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77d9f5ac-f679-4635-aea6-cd672717a00a 00:12:48.867 request: 00:12:48.867 { 00:12:48.867 "uuid": "77d9f5ac-f679-4635-aea6-cd672717a00a", 00:12:48.867 "method": "bdev_lvol_get_lvstores", 00:12:48.867 "req_id": 1 00:12:48.867 } 00:12:48.867 Got JSON-RPC error response 00:12:48.867 response: 00:12:48.867 { 00:12:48.867 "code": -19, 00:12:48.867 "message": "No such device" 00:12:48.867 } 00:12:48.867 16:05:18 -- common/autotest_common.sh@641 -- # es=1 00:12:48.867 16:05:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:48.867 16:05:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:48.867 16:05:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:48.867 16:05:18 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:49.126 aio_bdev 00:12:49.126 16:05:19 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 81ec77bd-e371-40d7-bee8-7b95d78560c9 00:12:49.126 16:05:19 -- common/autotest_common.sh@885 -- # local bdev_name=81ec77bd-e371-40d7-bee8-7b95d78560c9 00:12:49.126 16:05:19 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:49.126 16:05:19 -- common/autotest_common.sh@887 -- # local i 00:12:49.126 16:05:19 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:49.126 16:05:19 -- common/autotest_common.sh@888 -- # 
bdev_timeout=2000 00:12:49.126 16:05:19 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:49.384 16:05:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 81ec77bd-e371-40d7-bee8-7b95d78560c9 -t 2000 00:12:49.640 [ 00:12:49.640 { 00:12:49.640 "name": "81ec77bd-e371-40d7-bee8-7b95d78560c9", 00:12:49.640 "aliases": [ 00:12:49.640 "lvs/lvol" 00:12:49.640 ], 00:12:49.640 "product_name": "Logical Volume", 00:12:49.640 "block_size": 4096, 00:12:49.640 "num_blocks": 38912, 00:12:49.640 "uuid": "81ec77bd-e371-40d7-bee8-7b95d78560c9", 00:12:49.640 "assigned_rate_limits": { 00:12:49.640 "rw_ios_per_sec": 0, 00:12:49.640 "rw_mbytes_per_sec": 0, 00:12:49.640 "r_mbytes_per_sec": 0, 00:12:49.640 "w_mbytes_per_sec": 0 00:12:49.640 }, 00:12:49.640 "claimed": false, 00:12:49.640 "zoned": false, 00:12:49.640 "supported_io_types": { 00:12:49.640 "read": true, 00:12:49.640 "write": true, 00:12:49.640 "unmap": true, 00:12:49.640 "write_zeroes": true, 00:12:49.640 "flush": false, 00:12:49.640 "reset": true, 00:12:49.640 "compare": false, 00:12:49.640 "compare_and_write": false, 00:12:49.640 "abort": false, 00:12:49.640 "nvme_admin": false, 00:12:49.640 "nvme_io": false 00:12:49.640 }, 00:12:49.640 "driver_specific": { 00:12:49.640 "lvol": { 00:12:49.641 "lvol_store_uuid": "77d9f5ac-f679-4635-aea6-cd672717a00a", 00:12:49.641 "base_bdev": "aio_bdev", 00:12:49.641 "thin_provision": false, 00:12:49.641 "snapshot": false, 00:12:49.641 "clone": false, 00:12:49.641 "esnap_clone": false 00:12:49.641 } 00:12:49.641 } 00:12:49.641 } 00:12:49.641 ] 00:12:49.641 16:05:19 -- common/autotest_common.sh@893 -- # return 0 00:12:49.641 16:05:19 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:12:49.641 16:05:19 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77d9f5ac-f679-4635-aea6-cd672717a00a 00:12:49.897 16:05:19 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:12:49.897 16:05:19 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:12:49.897 16:05:19 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 77d9f5ac-f679-4635-aea6-cd672717a00a 00:12:50.156 16:05:20 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:12:50.156 16:05:20 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 81ec77bd-e371-40d7-bee8-7b95d78560c9 00:12:50.415 16:05:20 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 77d9f5ac-f679-4635-aea6-cd672717a00a 00:12:50.673 16:05:20 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:50.931 16:05:20 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:51.189 00:12:51.189 real 0m17.670s 00:12:51.189 user 0m15.643s 00:12:51.189 sys 0m3.015s 00:12:51.189 16:05:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:51.189 16:05:21 -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 ************************************ 00:12:51.189 END TEST lvs_grow_clean 00:12:51.189 ************************************ 00:12:51.447 16:05:21 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:51.447 16:05:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:51.447 16:05:21 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:12:51.447 16:05:21 -- common/autotest_common.sh@10 -- # set +x 00:12:51.447 ************************************ 00:12:51.447 START TEST lvs_grow_dirty 00:12:51.447 ************************************ 00:12:51.447 16:05:21 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:12:51.447 16:05:21 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:51.448 16:05:21 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:51.448 16:05:21 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:51.448 16:05:21 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:51.448 16:05:21 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:51.448 16:05:21 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:51.448 16:05:21 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:51.448 16:05:21 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:51.448 16:05:21 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:51.705 16:05:21 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:51.705 16:05:21 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:51.963 16:05:21 -- target/nvmf_lvs_grow.sh@28 -- # lvs=bee71f55-b2f6-4012-9ab2-caf3ccdc896c 00:12:51.963 16:05:21 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:51.963 16:05:21 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bee71f55-b2f6-4012-9ab2-caf3ccdc896c 00:12:52.221 16:05:22 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:52.221 16:05:22 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:52.221 16:05:22 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bee71f55-b2f6-4012-9ab2-caf3ccdc896c lvol 150 00:12:52.786 16:05:22 -- target/nvmf_lvs_grow.sh@33 -- # lvol=8321b861-509d-40f1-b2f6-4f35205536f9 00:12:52.786 16:05:22 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:52.786 16:05:22 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:52.786 [2024-04-15 16:05:22.654391] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:52.786 [2024-04-15 16:05:22.654734] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:52.786 true 00:12:52.786 16:05:22 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bee71f55-b2f6-4012-9ab2-caf3ccdc896c 00:12:52.786 16:05:22 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:53.044 16:05:22 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:53.044 16:05:22 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:53.302 16:05:23 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8321b861-509d-40f1-b2f6-4f35205536f9 00:12:53.560 16:05:23 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:53.818 16:05:23 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:54.076 16:05:23 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=78581 00:12:54.076 16:05:23 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:54.076 16:05:23 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:54.076 16:05:23 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 78581 /var/tmp/bdevperf.sock 00:12:54.076 16:05:23 -- common/autotest_common.sh@817 -- # '[' -z 78581 ']' 00:12:54.076 16:05:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:54.076 16:05:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:54.076 16:05:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:54.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:54.076 16:05:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:54.076 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:12:54.076 [2024-04-15 16:05:23.988433] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:12:54.076 [2024-04-15 16:05:23.988805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78581 ] 00:12:54.334 [2024-04-15 16:05:24.132013] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.334 [2024-04-15 16:05:24.188649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.918 16:05:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:54.918 16:05:24 -- common/autotest_common.sh@850 -- # return 0 00:12:54.918 16:05:24 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:55.176 Nvme0n1 00:12:55.433 16:05:25 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:55.433 [ 00:12:55.433 { 00:12:55.433 "name": "Nvme0n1", 00:12:55.433 "aliases": [ 00:12:55.433 "8321b861-509d-40f1-b2f6-4f35205536f9" 00:12:55.433 ], 00:12:55.433 "product_name": "NVMe disk", 00:12:55.433 "block_size": 4096, 00:12:55.433 "num_blocks": 38912, 00:12:55.433 "uuid": "8321b861-509d-40f1-b2f6-4f35205536f9", 00:12:55.433 "assigned_rate_limits": { 00:12:55.433 "rw_ios_per_sec": 0, 00:12:55.433 "rw_mbytes_per_sec": 0, 00:12:55.433 "r_mbytes_per_sec": 0, 00:12:55.433 "w_mbytes_per_sec": 0 00:12:55.433 }, 00:12:55.433 "claimed": false, 00:12:55.433 "zoned": false, 00:12:55.433 "supported_io_types": { 00:12:55.433 "read": true, 00:12:55.433 "write": true, 00:12:55.433 "unmap": true, 00:12:55.433 "write_zeroes": true, 00:12:55.433 "flush": true, 00:12:55.433 "reset": true, 
00:12:55.433 "compare": true, 00:12:55.433 "compare_and_write": true, 00:12:55.433 "abort": true, 00:12:55.433 "nvme_admin": true, 00:12:55.433 "nvme_io": true 00:12:55.433 }, 00:12:55.433 "memory_domains": [ 00:12:55.433 { 00:12:55.433 "dma_device_id": "system", 00:12:55.433 "dma_device_type": 1 00:12:55.433 } 00:12:55.433 ], 00:12:55.433 "driver_specific": { 00:12:55.433 "nvme": [ 00:12:55.433 { 00:12:55.433 "trid": { 00:12:55.433 "trtype": "TCP", 00:12:55.433 "adrfam": "IPv4", 00:12:55.433 "traddr": "10.0.0.2", 00:12:55.433 "trsvcid": "4420", 00:12:55.433 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:55.433 }, 00:12:55.433 "ctrlr_data": { 00:12:55.433 "cntlid": 1, 00:12:55.433 "vendor_id": "0x8086", 00:12:55.433 "model_number": "SPDK bdev Controller", 00:12:55.433 "serial_number": "SPDK0", 00:12:55.433 "firmware_revision": "24.05", 00:12:55.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:55.433 "oacs": { 00:12:55.433 "security": 0, 00:12:55.433 "format": 0, 00:12:55.433 "firmware": 0, 00:12:55.433 "ns_manage": 0 00:12:55.433 }, 00:12:55.433 "multi_ctrlr": true, 00:12:55.433 "ana_reporting": false 00:12:55.434 }, 00:12:55.434 "vs": { 00:12:55.434 "nvme_version": "1.3" 00:12:55.434 }, 00:12:55.434 "ns_data": { 00:12:55.434 "id": 1, 00:12:55.434 "can_share": true 00:12:55.434 } 00:12:55.434 } 00:12:55.434 ], 00:12:55.434 "mp_policy": "active_passive" 00:12:55.434 } 00:12:55.434 } 00:12:55.434 ] 00:12:55.434 16:05:25 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=78599 00:12:55.434 16:05:25 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:55.434 16:05:25 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:55.691 Running I/O for 10 seconds... 00:12:56.624 Latency(us) 00:12:56.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:56.624 Nvme0n1 : 1.00 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:12:56.624 =================================================================================================================== 00:12:56.624 Total : 10668.00 41.67 0.00 0.00 0.00 0.00 0.00 00:12:56.624 00:12:57.628 16:05:27 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bee71f55-b2f6-4012-9ab2-caf3ccdc896c 00:12:57.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:57.628 Nvme0n1 : 2.00 10461.00 40.86 0.00 0.00 0.00 0.00 0.00 00:12:57.628 =================================================================================================================== 00:12:57.628 Total : 10461.00 40.86 0.00 0.00 0.00 0.00 0.00 00:12:57.628 00:12:57.886 true 00:12:57.886 16:05:27 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bee71f55-b2f6-4012-9ab2-caf3ccdc896c 00:12:57.886 16:05:27 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:58.144 16:05:27 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:58.144 16:05:27 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:58.144 16:05:27 -- target/nvmf_lvs_grow.sh@65 -- # wait 78599 00:12:58.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:58.728 Nvme0n1 : 3.00 9979.67 38.98 0.00 0.00 0.00 0.00 0.00 00:12:58.728 =================================================================================================================== 00:12:58.728 
Total : 9979.67 38.98 0.00 0.00 0.00 0.00 0.00 00:12:58.728 00:12:59.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:59.660 Nvme0n1 : 4.00 10183.50 39.78 0.00 0.00 0.00 0.00 0.00 00:12:59.660 =================================================================================================================== 00:12:59.660 Total : 10183.50 39.78 0.00 0.00 0.00 0.00 0.00 00:12:59.660 00:13:00.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:00.604 Nvme0n1 : 5.00 10255.00 40.06 0.00 0.00 0.00 0.00 0.00 00:13:00.604 =================================================================================================================== 00:13:00.604 Total : 10255.00 40.06 0.00 0.00 0.00 0.00 0.00 00:13:00.604 00:13:01.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:01.539 Nvme0n1 : 6.00 10239.17 40.00 0.00 0.00 0.00 0.00 0.00 00:13:01.539 =================================================================================================================== 00:13:01.539 Total : 10239.17 40.00 0.00 0.00 0.00 0.00 0.00 00:13:01.539 00:13:02.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.920 Nvme0n1 : 7.00 9981.29 38.99 0.00 0.00 0.00 0.00 0.00 00:13:02.920 =================================================================================================================== 00:13:02.920 Total : 9981.29 38.99 0.00 0.00 0.00 0.00 0.00 00:13:02.920 00:13:03.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.854 Nvme0n1 : 8.00 10067.12 39.32 0.00 0.00 0.00 0.00 0.00 00:13:03.854 =================================================================================================================== 00:13:03.854 Total : 10067.12 39.32 0.00 0.00 0.00 0.00 0.00 00:13:03.854 00:13:04.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:04.790 Nvme0n1 : 9.00 10148.00 39.64 0.00 0.00 0.00 0.00 0.00 00:13:04.790 =================================================================================================================== 00:13:04.790 Total : 10148.00 39.64 0.00 0.00 0.00 0.00 0.00 00:13:04.790 00:13:05.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.726 Nvme0n1 : 10.00 10225.40 39.94 0.00 0.00 0.00 0.00 0.00 00:13:05.726 =================================================================================================================== 00:13:05.726 Total : 10225.40 39.94 0.00 0.00 0.00 0.00 0.00 00:13:05.726 00:13:05.726 00:13:05.726 Latency(us) 00:13:05.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.726 Nvme0n1 : 10.01 10232.81 39.97 0.00 0.00 12504.59 3760.52 182751.82 00:13:05.726 =================================================================================================================== 00:13:05.726 Total : 10232.81 39.97 0.00 0.00 12504.59 3760.52 182751.82 00:13:05.726 0 00:13:05.726 16:05:35 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 78581 00:13:05.726 16:05:35 -- common/autotest_common.sh@936 -- # '[' -z 78581 ']' 00:13:05.726 16:05:35 -- common/autotest_common.sh@940 -- # kill -0 78581 00:13:05.726 16:05:35 -- common/autotest_common.sh@941 -- # uname 00:13:05.726 16:05:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:05.726 16:05:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
78581 00:13:05.726 killing process with pid 78581 00:13:05.726 Received shutdown signal, test time was about 10.000000 seconds 00:13:05.726 00:13:05.726 Latency(us) 00:13:05.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.726 =================================================================================================================== 00:13:05.726 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:05.726 16:05:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:05.726 16:05:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:05.726 16:05:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78581' 00:13:05.726 16:05:35 -- common/autotest_common.sh@955 -- # kill 78581 00:13:05.726 16:05:35 -- common/autotest_common.sh@960 -- # wait 78581 00:13:05.985 16:05:35 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:05.985 16:05:35 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:13:05.985 16:05:35 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bee71f55-b2f6-4012-9ab2-caf3ccdc896c 00:13:06.304 16:05:36 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:13:06.304 16:05:36 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:13:06.304 16:05:36 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 78228 00:13:06.304 16:05:36 -- target/nvmf_lvs_grow.sh@74 -- # wait 78228 00:13:06.304 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 78228 Killed "${NVMF_APP[@]}" "$@" 00:13:06.304 16:05:36 -- target/nvmf_lvs_grow.sh@74 -- # true 00:13:06.304 16:05:36 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:13:06.304 16:05:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:06.304 16:05:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:06.304 16:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.563 16:05:36 -- nvmf/common.sh@470 -- # nvmfpid=78731 00:13:06.563 16:05:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:06.563 16:05:36 -- nvmf/common.sh@471 -- # waitforlisten 78731 00:13:06.563 16:05:36 -- common/autotest_common.sh@817 -- # '[' -z 78731 ']' 00:13:06.563 16:05:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.563 16:05:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:06.563 16:05:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.563 16:05:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:06.563 16:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.563 [2024-04-15 16:05:36.306792] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:13:06.563 [2024-04-15 16:05:36.307055] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.563 [2024-04-15 16:05:36.447208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.563 [2024-04-15 16:05:36.496193] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
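The "dirty" branch traced above differs from the clean run in how it tears the target down: the subsystem is deleted and the lvstore state is read back, but the nvmf target itself is killed with SIGKILL, so the lvstore is never cleanly unloaded before a fresh target (pid 78731 here) is started in the same network namespace. A minimal sketch of that step, using the rpc.py path and lvstore UUID from this run; $nvmf_pid is a placeholder for the pid of the running target (78228 in this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs_uuid=bee71f55-b2f6-4012-9ab2-caf3ccdc896c

    # Remove the subsystem that exported the lvol, then read the lvstore state.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters'

    # SIGKILL the target so the lvstore is left dirty (no clean unload),
    # then start a fresh target in the test namespace; re-creating the AIO
    # bdev later triggers the blobstore recovery seen further down the log.
    kill -9 "$nvmf_pid"
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmf_pid=$!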
00:13:06.564 [2024-04-15 16:05:36.496408] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.564 [2024-04-15 16:05:36.496506] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.564 [2024-04-15 16:05:36.496556] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.564 [2024-04-15 16:05:36.496601] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.564 [2024-04-15 16:05:36.496716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.498 16:05:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:07.498 16:05:37 -- common/autotest_common.sh@850 -- # return 0 00:13:07.498 16:05:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:07.498 16:05:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:07.498 16:05:37 -- common/autotest_common.sh@10 -- # set +x 00:13:07.498 16:05:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.498 16:05:37 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:07.757 [2024-04-15 16:05:37.505947] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:07.757 [2024-04-15 16:05:37.506438] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:07.757 [2024-04-15 16:05:37.506723] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:07.757 16:05:37 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:13:07.757 16:05:37 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 8321b861-509d-40f1-b2f6-4f35205536f9 00:13:07.757 16:05:37 -- common/autotest_common.sh@885 -- # local bdev_name=8321b861-509d-40f1-b2f6-4f35205536f9 00:13:07.757 16:05:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:07.757 16:05:37 -- common/autotest_common.sh@887 -- # local i 00:13:07.757 16:05:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:07.757 16:05:37 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:07.757 16:05:37 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:08.016 16:05:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8321b861-509d-40f1-b2f6-4f35205536f9 -t 2000 00:13:08.016 [ 00:13:08.016 { 00:13:08.016 "name": "8321b861-509d-40f1-b2f6-4f35205536f9", 00:13:08.016 "aliases": [ 00:13:08.016 "lvs/lvol" 00:13:08.016 ], 00:13:08.016 "product_name": "Logical Volume", 00:13:08.016 "block_size": 4096, 00:13:08.016 "num_blocks": 38912, 00:13:08.016 "uuid": "8321b861-509d-40f1-b2f6-4f35205536f9", 00:13:08.016 "assigned_rate_limits": { 00:13:08.016 "rw_ios_per_sec": 0, 00:13:08.016 "rw_mbytes_per_sec": 0, 00:13:08.016 "r_mbytes_per_sec": 0, 00:13:08.016 "w_mbytes_per_sec": 0 00:13:08.016 }, 00:13:08.016 "claimed": false, 00:13:08.016 "zoned": false, 00:13:08.016 "supported_io_types": { 00:13:08.016 "read": true, 00:13:08.016 "write": true, 00:13:08.016 "unmap": true, 00:13:08.016 "write_zeroes": true, 00:13:08.016 "flush": false, 00:13:08.016 "reset": true, 00:13:08.016 "compare": false, 00:13:08.016 "compare_and_write": false, 00:13:08.016 "abort": false, 00:13:08.016 "nvme_admin": false, 00:13:08.016 "nvme_io": false 00:13:08.016 }, 00:13:08.016 
"driver_specific": { 00:13:08.016 "lvol": { 00:13:08.016 "lvol_store_uuid": "bee71f55-b2f6-4012-9ab2-caf3ccdc896c", 00:13:08.016 "base_bdev": "aio_bdev", 00:13:08.016 "thin_provision": false, 00:13:08.016 "snapshot": false, 00:13:08.016 "clone": false, 00:13:08.016 "esnap_clone": false 00:13:08.016 } 00:13:08.016 } 00:13:08.016 } 00:13:08.016 ] 00:13:08.016 16:05:37 -- common/autotest_common.sh@893 -- # return 0 00:13:08.016 16:05:37 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bee71f55-b2f6-4012-9ab2-caf3ccdc896c 00:13:08.016 16:05:37 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:13:08.274 16:05:38 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:13:08.274 16:05:38 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:13:08.274 16:05:38 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bee71f55-b2f6-4012-9ab2-caf3ccdc896c 00:13:08.533 16:05:38 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:13:08.533 16:05:38 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:08.792 [2024-04-15 16:05:38.599381] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:08.792 16:05:38 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bee71f55-b2f6-4012-9ab2-caf3ccdc896c 00:13:08.792 16:05:38 -- common/autotest_common.sh@638 -- # local es=0 00:13:08.792 16:05:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bee71f55-b2f6-4012-9ab2-caf3ccdc896c 00:13:08.792 16:05:38 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.792 16:05:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:08.792 16:05:38 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.792 16:05:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:08.792 16:05:38 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.792 16:05:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:08.792 16:05:38 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.792 16:05:38 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:08.792 16:05:38 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bee71f55-b2f6-4012-9ab2-caf3ccdc896c 00:13:09.050 request: 00:13:09.051 { 00:13:09.051 "uuid": "bee71f55-b2f6-4012-9ab2-caf3ccdc896c", 00:13:09.051 "method": "bdev_lvol_get_lvstores", 00:13:09.051 "req_id": 1 00:13:09.051 } 00:13:09.051 Got JSON-RPC error response 00:13:09.051 response: 00:13:09.051 { 00:13:09.051 "code": -19, 00:13:09.051 "message": "No such device" 00:13:09.051 } 00:13:09.051 16:05:38 -- common/autotest_common.sh@641 -- # es=1 00:13:09.051 16:05:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:09.051 16:05:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:09.051 16:05:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:09.051 16:05:38 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:09.309 aio_bdev 00:13:09.309 16:05:39 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 8321b861-509d-40f1-b2f6-4f35205536f9 00:13:09.309 16:05:39 -- common/autotest_common.sh@885 -- # local bdev_name=8321b861-509d-40f1-b2f6-4f35205536f9 00:13:09.309 16:05:39 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:09.309 16:05:39 -- common/autotest_common.sh@887 -- # local i 00:13:09.309 16:05:39 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:09.309 16:05:39 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:09.309 16:05:39 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:09.567 16:05:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8321b861-509d-40f1-b2f6-4f35205536f9 -t 2000 00:13:09.567 [ 00:13:09.567 { 00:13:09.567 "name": "8321b861-509d-40f1-b2f6-4f35205536f9", 00:13:09.567 "aliases": [ 00:13:09.567 "lvs/lvol" 00:13:09.567 ], 00:13:09.567 "product_name": "Logical Volume", 00:13:09.567 "block_size": 4096, 00:13:09.567 "num_blocks": 38912, 00:13:09.567 "uuid": "8321b861-509d-40f1-b2f6-4f35205536f9", 00:13:09.567 "assigned_rate_limits": { 00:13:09.567 "rw_ios_per_sec": 0, 00:13:09.567 "rw_mbytes_per_sec": 0, 00:13:09.567 "r_mbytes_per_sec": 0, 00:13:09.567 "w_mbytes_per_sec": 0 00:13:09.567 }, 00:13:09.567 "claimed": false, 00:13:09.567 "zoned": false, 00:13:09.567 "supported_io_types": { 00:13:09.567 "read": true, 00:13:09.567 "write": true, 00:13:09.567 "unmap": true, 00:13:09.567 "write_zeroes": true, 00:13:09.567 "flush": false, 00:13:09.567 "reset": true, 00:13:09.567 "compare": false, 00:13:09.567 "compare_and_write": false, 00:13:09.567 "abort": false, 00:13:09.567 "nvme_admin": false, 00:13:09.567 "nvme_io": false 00:13:09.567 }, 00:13:09.567 "driver_specific": { 00:13:09.567 "lvol": { 00:13:09.567 "lvol_store_uuid": "bee71f55-b2f6-4012-9ab2-caf3ccdc896c", 00:13:09.567 "base_bdev": "aio_bdev", 00:13:09.567 "thin_provision": false, 00:13:09.567 "snapshot": false, 00:13:09.567 "clone": false, 00:13:09.567 "esnap_clone": false 00:13:09.567 } 00:13:09.567 } 00:13:09.567 } 00:13:09.567 ] 00:13:09.567 16:05:39 -- common/autotest_common.sh@893 -- # return 0 00:13:09.567 16:05:39 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bee71f55-b2f6-4012-9ab2-caf3ccdc896c 00:13:09.567 16:05:39 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:13:10.135 16:05:39 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:13:10.135 16:05:39 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bee71f55-b2f6-4012-9ab2-caf3ccdc896c 00:13:10.135 16:05:39 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:13:10.393 16:05:40 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:13:10.393 16:05:40 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8321b861-509d-40f1-b2f6-4f35205536f9 00:13:10.652 16:05:40 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bee71f55-b2f6-4012-9ab2-caf3ccdc896c 00:13:10.911 16:05:40 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:10.911 16:05:40 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 
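Once the lvstore has been reopened (the bdev_aio_delete/bdev_aio_create round above also exercises the hot-remove path, with bdev_lvol_get_lvstores expected to fail with -19 in between), the test re-checks the cluster counts and tears everything down. A condensed sketch of that verification and cleanup, with the UUIDs and expected counts taken from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs_uuid=bee71f55-b2f6-4012-9ab2-caf3ccdc896c
    lvol_uuid=8321b861-509d-40f1-b2f6-4f35205536f9

    free=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
    total=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
    # After growing the 200M backing file to 400M, the 4M-cluster lvstore
    # reports 99 data clusters, 61 of them free (the 150M lvol uses the rest).
    (( free == 61 && total == 99 )) || echo "unexpected cluster counts" >&2

    # Tear down: lvol, then lvstore, then the AIO bdev and its backing file.
    $rpc bdev_lvol_delete "$lvol_uuid"
    $rpc bdev_lvol_delete_lvstore -u "$lvs_uuid"
    $rpc bdev_aio_delete aio_bdev
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev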
00:13:11.478 00:13:11.478 real 0m20.026s 00:13:11.478 user 0m42.964s 00:13:11.478 sys 0m9.249s 00:13:11.478 16:05:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:11.478 16:05:41 -- common/autotest_common.sh@10 -- # set +x 00:13:11.478 ************************************ 00:13:11.478 END TEST lvs_grow_dirty 00:13:11.478 ************************************ 00:13:11.478 16:05:41 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:11.478 16:05:41 -- common/autotest_common.sh@794 -- # type=--id 00:13:11.478 16:05:41 -- common/autotest_common.sh@795 -- # id=0 00:13:11.478 16:05:41 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:13:11.478 16:05:41 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:11.478 16:05:41 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:13:11.478 16:05:41 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:13:11.478 16:05:41 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:13:11.478 16:05:41 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:11.478 nvmf_trace.0 00:13:11.478 16:05:41 -- common/autotest_common.sh@809 -- # return 0 00:13:11.478 16:05:41 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:11.478 16:05:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:11.478 16:05:41 -- nvmf/common.sh@117 -- # sync 00:13:11.735 16:05:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:11.735 16:05:41 -- nvmf/common.sh@120 -- # set +e 00:13:11.735 16:05:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:11.735 16:05:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:11.735 rmmod nvme_tcp 00:13:11.735 rmmod nvme_fabrics 00:13:11.735 16:05:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:11.735 16:05:41 -- nvmf/common.sh@124 -- # set -e 00:13:11.735 16:05:41 -- nvmf/common.sh@125 -- # return 0 00:13:11.735 16:05:41 -- nvmf/common.sh@478 -- # '[' -n 78731 ']' 00:13:11.735 16:05:41 -- nvmf/common.sh@479 -- # killprocess 78731 00:13:11.735 16:05:41 -- common/autotest_common.sh@936 -- # '[' -z 78731 ']' 00:13:11.735 16:05:41 -- common/autotest_common.sh@940 -- # kill -0 78731 00:13:11.735 16:05:41 -- common/autotest_common.sh@941 -- # uname 00:13:11.735 16:05:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:11.735 16:05:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78731 00:13:11.736 16:05:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:11.736 16:05:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:11.736 16:05:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78731' 00:13:11.736 killing process with pid 78731 00:13:11.736 16:05:41 -- common/autotest_common.sh@955 -- # kill 78731 00:13:11.736 16:05:41 -- common/autotest_common.sh@960 -- # wait 78731 00:13:11.993 16:05:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:11.993 16:05:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:11.993 16:05:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:11.993 16:05:41 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.993 16:05:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:11.993 16:05:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.993 16:05:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.993 16:05:41 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:11.993 16:05:41 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:11.993 00:13:11.993 real 0m40.401s 00:13:11.993 user 1m4.796s 00:13:11.993 sys 0m13.055s 00:13:11.993 16:05:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:11.993 16:05:41 -- common/autotest_common.sh@10 -- # set +x 00:13:11.993 ************************************ 00:13:11.993 END TEST nvmf_lvs_grow 00:13:11.993 ************************************ 00:13:11.993 16:05:41 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:11.993 16:05:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:11.993 16:05:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:11.993 16:05:41 -- common/autotest_common.sh@10 -- # set +x 00:13:12.252 ************************************ 00:13:12.252 START TEST nvmf_bdev_io_wait 00:13:12.252 ************************************ 00:13:12.252 16:05:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:12.252 * Looking for test storage... 00:13:12.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:12.252 16:05:42 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:12.252 16:05:42 -- nvmf/common.sh@7 -- # uname -s 00:13:12.252 16:05:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.252 16:05:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.252 16:05:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.252 16:05:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.252 16:05:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.252 16:05:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.252 16:05:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.252 16:05:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.252 16:05:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.252 16:05:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.252 16:05:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:13:12.252 16:05:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:13:12.252 16:05:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.252 16:05:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.252 16:05:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:12.252 16:05:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.252 16:05:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:12.252 16:05:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.252 16:05:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.252 16:05:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.252 16:05:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.252 16:05:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.252 16:05:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.252 16:05:42 -- paths/export.sh@5 -- # export PATH 00:13:12.252 16:05:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.252 16:05:42 -- nvmf/common.sh@47 -- # : 0 00:13:12.252 16:05:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:12.252 16:05:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:12.252 16:05:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.252 16:05:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.252 16:05:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.252 16:05:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:12.252 16:05:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:12.252 16:05:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:12.252 16:05:42 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:12.252 16:05:42 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:12.252 16:05:42 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:12.252 16:05:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:12.252 16:05:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.252 16:05:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:12.252 16:05:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:12.252 16:05:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:12.252 16:05:42 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.252 16:05:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.252 16:05:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.252 16:05:42 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:12.252 16:05:42 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:12.252 16:05:42 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:12.253 16:05:42 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:13:12.253 16:05:42 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:12.253 16:05:42 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:12.253 16:05:42 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.253 16:05:42 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.253 16:05:42 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:12.253 16:05:42 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:12.253 16:05:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:12.253 16:05:42 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:12.253 16:05:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:12.253 16:05:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.253 16:05:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:12.253 16:05:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:12.253 16:05:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:12.253 16:05:42 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:12.253 16:05:42 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:12.253 16:05:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:12.253 Cannot find device "nvmf_tgt_br" 00:13:12.253 16:05:42 -- nvmf/common.sh@155 -- # true 00:13:12.253 16:05:42 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:12.253 Cannot find device "nvmf_tgt_br2" 00:13:12.253 16:05:42 -- nvmf/common.sh@156 -- # true 00:13:12.253 16:05:42 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:12.253 16:05:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:12.253 Cannot find device "nvmf_tgt_br" 00:13:12.253 16:05:42 -- nvmf/common.sh@158 -- # true 00:13:12.253 16:05:42 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:12.253 Cannot find device "nvmf_tgt_br2" 00:13:12.253 16:05:42 -- nvmf/common.sh@159 -- # true 00:13:12.253 16:05:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:12.510 16:05:42 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:12.510 16:05:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:12.510 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.510 16:05:42 -- nvmf/common.sh@162 -- # true 00:13:12.510 16:05:42 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:12.510 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.510 16:05:42 -- nvmf/common.sh@163 -- # true 00:13:12.510 16:05:42 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:12.510 16:05:42 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:12.510 16:05:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:12.510 16:05:42 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:13:12.510 16:05:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:12.510 16:05:42 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:12.510 16:05:42 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:12.510 16:05:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:12.510 16:05:42 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:12.510 16:05:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:12.510 16:05:42 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:12.510 16:05:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:12.510 16:05:42 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:12.510 16:05:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:12.510 16:05:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:12.510 16:05:42 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:12.510 16:05:42 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:12.510 16:05:42 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:12.510 16:05:42 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:12.510 16:05:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:12.510 16:05:42 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:12.510 16:05:42 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:12.510 16:05:42 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:12.510 16:05:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:12.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:13:12.510 00:13:12.510 --- 10.0.0.2 ping statistics --- 00:13:12.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.510 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:13:12.510 16:05:42 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:12.510 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:12.510 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:13:12.510 00:13:12.510 --- 10.0.0.3 ping statistics --- 00:13:12.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.510 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:13:12.510 16:05:42 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:12.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:12.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:12.767 00:13:12.767 --- 10.0.0.1 ping statistics --- 00:13:12.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.767 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:12.767 16:05:42 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.767 16:05:42 -- nvmf/common.sh@422 -- # return 0 00:13:12.767 16:05:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:12.767 16:05:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.767 16:05:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:12.767 16:05:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:12.767 16:05:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.767 16:05:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:12.767 16:05:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:12.767 16:05:42 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:12.767 16:05:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:12.767 16:05:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:12.767 16:05:42 -- common/autotest_common.sh@10 -- # set +x 00:13:12.767 16:05:42 -- nvmf/common.sh@470 -- # nvmfpid=79055 00:13:12.767 16:05:42 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:12.767 16:05:42 -- nvmf/common.sh@471 -- # waitforlisten 79055 00:13:12.767 16:05:42 -- common/autotest_common.sh@817 -- # '[' -z 79055 ']' 00:13:12.767 16:05:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.767 16:05:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:12.767 16:05:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.767 16:05:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:12.767 16:05:42 -- common/autotest_common.sh@10 -- # set +x 00:13:12.767 [2024-04-15 16:05:42.560816] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:13:12.767 [2024-04-15 16:05:42.560918] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.767 [2024-04-15 16:05:42.705879] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:13.025 [2024-04-15 16:05:42.762890] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.025 [2024-04-15 16:05:42.762955] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.025 [2024-04-15 16:05:42.762970] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.025 [2024-04-15 16:05:42.762983] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.025 [2024-04-15 16:05:42.762994] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
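For the bdev_io_wait run, nvmftestinit builds the same veth/namespace topology used throughout these TCP tests: the target runs inside nvmf_tgt_ns_spdk and reaches the host-side initiator through a bridge, with 10.0.0.1 on the initiator and 10.0.0.2/10.0.0.3 on the target interfaces. A sketch condensed from the ip/iptables calls traced above (interface names and addresses are the ones used in this run; the loop and sh -c grouping are just shorthand):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address

    # Bring everything up and bridge the host-side peers together.
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Open TCP/4420 toward the initiator interface, allow bridged forwarding,
    # and verify connectivity in both directions before starting the target.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1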
00:13:13.025 [2024-04-15 16:05:42.763128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.025 [2024-04-15 16:05:42.763289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.025 [2024-04-15 16:05:42.764267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.025 [2024-04-15 16:05:42.764271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.590 16:05:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:13.590 16:05:43 -- common/autotest_common.sh@850 -- # return 0 00:13:13.590 16:05:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:13.590 16:05:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:13.590 16:05:43 -- common/autotest_common.sh@10 -- # set +x 00:13:13.590 16:05:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.590 16:05:43 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:13.590 16:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.590 16:05:43 -- common/autotest_common.sh@10 -- # set +x 00:13:13.590 16:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.590 16:05:43 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:13.590 16:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.590 16:05:43 -- common/autotest_common.sh@10 -- # set +x 00:13:13.848 16:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:13.848 16:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.848 16:05:43 -- common/autotest_common.sh@10 -- # set +x 00:13:13.848 [2024-04-15 16:05:43.578250] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.848 16:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:13.848 16:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.848 16:05:43 -- common/autotest_common.sh@10 -- # set +x 00:13:13.848 Malloc0 00:13:13.848 16:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:13.848 16:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.848 16:05:43 -- common/autotest_common.sh@10 -- # set +x 00:13:13.848 16:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:13.848 16:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.848 16:05:43 -- common/autotest_common.sh@10 -- # set +x 00:13:13.848 16:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.848 16:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.848 16:05:43 -- common/autotest_common.sh@10 -- # set +x 00:13:13.848 [2024-04-15 16:05:43.638339] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.848 16:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=79090 00:13:13.848 16:05:43 
-- target/bdev_io_wait.sh@30 -- # READ_PID=79092 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:13.848 16:05:43 -- nvmf/common.sh@521 -- # config=() 00:13:13.848 16:05:43 -- nvmf/common.sh@521 -- # local subsystem config 00:13:13.848 16:05:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=79094 00:13:13.848 16:05:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:13.848 { 00:13:13.848 "params": { 00:13:13.848 "name": "Nvme$subsystem", 00:13:13.848 "trtype": "$TEST_TRANSPORT", 00:13:13.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:13.848 "adrfam": "ipv4", 00:13:13.848 "trsvcid": "$NVMF_PORT", 00:13:13.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:13.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:13.848 "hdgst": ${hdgst:-false}, 00:13:13.848 "ddgst": ${ddgst:-false} 00:13:13.848 }, 00:13:13.848 "method": "bdev_nvme_attach_controller" 00:13:13.848 } 00:13:13.848 EOF 00:13:13.848 )") 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:13.848 16:05:43 -- nvmf/common.sh@521 -- # config=() 00:13:13.848 16:05:43 -- nvmf/common.sh@521 -- # local subsystem config 00:13:13.848 16:05:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=79095 00:13:13.848 16:05:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:13.848 { 00:13:13.848 "params": { 00:13:13.848 "name": "Nvme$subsystem", 00:13:13.848 "trtype": "$TEST_TRANSPORT", 00:13:13.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:13.848 "adrfam": "ipv4", 00:13:13.848 "trsvcid": "$NVMF_PORT", 00:13:13.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:13.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:13.848 "hdgst": ${hdgst:-false}, 00:13:13.848 "ddgst": ${ddgst:-false} 00:13:13.848 }, 00:13:13.848 "method": "bdev_nvme_attach_controller" 00:13:13.848 } 00:13:13.848 EOF 00:13:13.848 )") 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@35 -- # sync 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:13.848 16:05:43 -- nvmf/common.sh@543 -- # cat 00:13:13.848 16:05:43 -- nvmf/common.sh@543 -- # cat 00:13:13.848 16:05:43 -- nvmf/common.sh@545 -- # jq . 00:13:13.848 16:05:43 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:13.848 16:05:43 -- nvmf/common.sh@545 -- # jq . 
00:13:13.848 16:05:43 -- nvmf/common.sh@546 -- # IFS=, 00:13:13.848 16:05:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:13.848 "params": { 00:13:13.848 "name": "Nvme1", 00:13:13.848 "trtype": "tcp", 00:13:13.848 "traddr": "10.0.0.2", 00:13:13.848 "adrfam": "ipv4", 00:13:13.848 "trsvcid": "4420", 00:13:13.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:13.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:13.848 "hdgst": false, 00:13:13.848 "ddgst": false 00:13:13.848 }, 00:13:13.848 "method": "bdev_nvme_attach_controller" 00:13:13.848 }' 00:13:13.848 16:05:43 -- nvmf/common.sh@546 -- # IFS=, 00:13:13.848 16:05:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:13.848 "params": { 00:13:13.848 "name": "Nvme1", 00:13:13.848 "trtype": "tcp", 00:13:13.848 "traddr": "10.0.0.2", 00:13:13.848 "adrfam": "ipv4", 00:13:13.848 "trsvcid": "4420", 00:13:13.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:13.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:13.849 "hdgst": false, 00:13:13.849 "ddgst": false 00:13:13.849 }, 00:13:13.849 "method": "bdev_nvme_attach_controller" 00:13:13.849 }' 00:13:13.849 16:05:43 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:13.849 16:05:43 -- nvmf/common.sh@521 -- # config=() 00:13:13.849 16:05:43 -- nvmf/common.sh@521 -- # local subsystem config 00:13:13.849 16:05:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:13.849 16:05:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:13.849 { 00:13:13.849 "params": { 00:13:13.849 "name": "Nvme$subsystem", 00:13:13.849 "trtype": "$TEST_TRANSPORT", 00:13:13.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:13.849 "adrfam": "ipv4", 00:13:13.849 "trsvcid": "$NVMF_PORT", 00:13:13.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:13.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:13.849 "hdgst": ${hdgst:-false}, 00:13:13.849 "ddgst": ${ddgst:-false} 00:13:13.849 }, 00:13:13.849 "method": "bdev_nvme_attach_controller" 00:13:13.849 } 00:13:13.849 EOF 00:13:13.849 )") 00:13:13.849 16:05:43 -- nvmf/common.sh@543 -- # cat 00:13:13.849 16:05:43 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:13.849 16:05:43 -- nvmf/common.sh@521 -- # config=() 00:13:13.849 16:05:43 -- nvmf/common.sh@521 -- # local subsystem config 00:13:13.849 16:05:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:13.849 16:05:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:13.849 { 00:13:13.849 "params": { 00:13:13.849 "name": "Nvme$subsystem", 00:13:13.849 "trtype": "$TEST_TRANSPORT", 00:13:13.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:13.849 "adrfam": "ipv4", 00:13:13.849 "trsvcid": "$NVMF_PORT", 00:13:13.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:13.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:13.849 "hdgst": ${hdgst:-false}, 00:13:13.849 "ddgst": ${ddgst:-false} 00:13:13.849 }, 00:13:13.849 "method": "bdev_nvme_attach_controller" 00:13:13.849 } 00:13:13.849 EOF 00:13:13.849 )") 00:13:13.849 16:05:43 -- nvmf/common.sh@545 -- # jq . 
00:13:13.849 16:05:43 -- nvmf/common.sh@543 -- # cat 00:13:13.849 16:05:43 -- nvmf/common.sh@546 -- # IFS=, 00:13:13.849 16:05:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:13.849 "params": { 00:13:13.849 "name": "Nvme1", 00:13:13.849 "trtype": "tcp", 00:13:13.849 "traddr": "10.0.0.2", 00:13:13.849 "adrfam": "ipv4", 00:13:13.849 "trsvcid": "4420", 00:13:13.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:13.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:13.849 "hdgst": false, 00:13:13.849 "ddgst": false 00:13:13.849 }, 00:13:13.849 "method": "bdev_nvme_attach_controller" 00:13:13.849 }' 00:13:13.849 16:05:43 -- nvmf/common.sh@545 -- # jq . 00:13:13.849 16:05:43 -- target/bdev_io_wait.sh@37 -- # wait 79090 00:13:13.849 16:05:43 -- nvmf/common.sh@546 -- # IFS=, 00:13:13.849 16:05:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:13.849 "params": { 00:13:13.849 "name": "Nvme1", 00:13:13.849 "trtype": "tcp", 00:13:13.849 "traddr": "10.0.0.2", 00:13:13.849 "adrfam": "ipv4", 00:13:13.849 "trsvcid": "4420", 00:13:13.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:13.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:13.849 "hdgst": false, 00:13:13.849 "ddgst": false 00:13:13.849 }, 00:13:13.849 "method": "bdev_nvme_attach_controller" 00:13:13.849 }' 00:13:13.849 [2024-04-15 16:05:43.696108] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:13:13.849 [2024-04-15 16:05:43.696178] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:13.849 [2024-04-15 16:05:43.698094] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:13:13.849 [2024-04-15 16:05:43.698163] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:13.849 [2024-04-15 16:05:43.699484] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:13:13.849 [2024-04-15 16:05:43.699696] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:13.849 [2024-04-15 16:05:43.703820] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:13:13.849 [2024-04-15 16:05:43.704144] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:14.107 [2024-04-15 16:05:43.890177] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.107 [2024-04-15 16:05:43.917555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:14.107 [2024-04-15 16:05:43.926376] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:13:14.107 [2024-04-15 16:05:43.947665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.107 [2024-04-15 16:05:43.979496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:14.107 [2024-04-15 16:05:43.988229] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:13:14.107 [2024-04-15 16:05:44.012478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.107 [2024-04-15 16:05:44.041290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:14.107 [2024-04-15 16:05:44.050168] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:13:14.365 [2024-04-15 16:05:44.077872] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.365 Running I/O for 1 seconds... 00:13:14.365 [2024-04-15 16:05:44.108019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:13:14.365 Running I/O for 1 seconds... 00:13:14.365 [2024-04-15 16:05:44.117064] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:13:14.365 Running I/O for 1 seconds... 00:13:14.365 Running I/O for 1 seconds... 00:13:15.299 00:13:15.299 Latency(us) 00:13:15.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.299 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:15.299 Nvme1n1 : 1.01 9029.09 35.27 0.00 0.00 14106.75 9112.62 20472.20 00:13:15.299 =================================================================================================================== 00:13:15.299 Total : 9029.09 35.27 0.00 0.00 14106.75 9112.62 20472.20 00:13:15.299 00:13:15.299 Latency(us) 00:13:15.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.299 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:15.299 Nvme1n1 : 1.01 8515.47 33.26 0.00 0.00 14966.26 6865.68 25839.91 00:13:15.299 =================================================================================================================== 00:13:15.299 Total : 8515.47 33.26 0.00 0.00 14966.26 6865.68 25839.91 00:13:15.299 00:13:15.299 Latency(us) 00:13:15.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.300 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:15.300 Nvme1n1 : 1.00 184218.37 719.60 0.00 0.00 692.66 315.98 1006.45 00:13:15.300 =================================================================================================================== 00:13:15.300 Total : 184218.37 719.60 0.00 0.00 692.66 315.98 1006.45 00:13:15.558 00:13:15.558 Latency(us) 00:13:15.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.558 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:15.558 Nvme1n1 : 1.01 8786.42 34.32 0.00 0.00 14513.00 6678.43 26464.06 00:13:15.558 
=================================================================================================================== 00:13:15.558 Total : 8786.42 34.32 0.00 0.00 14513.00 6678.43 26464.06 00:13:15.558 16:05:45 -- target/bdev_io_wait.sh@38 -- # wait 79092 00:13:15.558 16:05:45 -- target/bdev_io_wait.sh@39 -- # wait 79094 00:13:15.558 16:05:45 -- target/bdev_io_wait.sh@40 -- # wait 79095 00:13:15.558 16:05:45 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.558 16:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:15.558 16:05:45 -- common/autotest_common.sh@10 -- # set +x 00:13:15.558 16:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:15.558 16:05:45 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:15.558 16:05:45 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:15.558 16:05:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:15.558 16:05:45 -- nvmf/common.sh@117 -- # sync 00:13:15.817 16:05:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.817 16:05:45 -- nvmf/common.sh@120 -- # set +e 00:13:15.817 16:05:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.817 16:05:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.817 rmmod nvme_tcp 00:13:15.817 rmmod nvme_fabrics 00:13:15.817 16:05:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.817 16:05:45 -- nvmf/common.sh@124 -- # set -e 00:13:15.817 16:05:45 -- nvmf/common.sh@125 -- # return 0 00:13:15.817 16:05:45 -- nvmf/common.sh@478 -- # '[' -n 79055 ']' 00:13:15.817 16:05:45 -- nvmf/common.sh@479 -- # killprocess 79055 00:13:15.817 16:05:45 -- common/autotest_common.sh@936 -- # '[' -z 79055 ']' 00:13:15.817 16:05:45 -- common/autotest_common.sh@940 -- # kill -0 79055 00:13:15.817 16:05:45 -- common/autotest_common.sh@941 -- # uname 00:13:15.817 16:05:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:15.817 16:05:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79055 00:13:15.817 16:05:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:15.817 16:05:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:15.817 killing process with pid 79055 00:13:15.817 16:05:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79055' 00:13:15.817 16:05:45 -- common/autotest_common.sh@955 -- # kill 79055 00:13:15.817 16:05:45 -- common/autotest_common.sh@960 -- # wait 79055 00:13:15.817 16:05:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:15.817 16:05:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:15.817 16:05:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:15.817 16:05:45 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.817 16:05:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:15.817 16:05:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.817 16:05:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.817 16:05:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.077 16:05:45 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:16.077 00:13:16.077 real 0m3.853s 00:13:16.077 user 0m16.209s 00:13:16.077 sys 0m2.271s 00:13:16.077 16:05:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:16.077 ************************************ 00:13:16.077 END TEST nvmf_bdev_io_wait 00:13:16.077 16:05:45 -- common/autotest_common.sh@10 -- # set +x 00:13:16.077 ************************************ 
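The four bdevperf jobs in the nvmf_bdev_io_wait run above (write, read, flush, unmap) all follow the same pattern: a small bdev_nvme attach config is generated on the fly by gen_nvmf_target_json and handed to bdevperf through a process-substitution file descriptor (--json /dev/fd/63). The sketch below reproduces one such invocation standalone. The bdevperf path, the -q/-o/-w/-t/-s/-m/-i arguments, the 10.0.0.2:4420 listener and the cnode1/host1 NQNs are taken directly from the trace; the surrounding "subsystems"/"bdev" wrapper is the generic SPDK JSON app-config shape and is assumed here rather than copied from the harness, so treat this as an illustrative sketch, not the exact test code.

#!/usr/bin/env bash
# Sketch: drive one 4 KiB sequential-write job for 1 second against the
# NVMe-oF/TCP subsystem the target in the trace exposes on 10.0.0.2:4420.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# Emit a minimal SPDK JSON config that attaches the remote namespace as a
# local bdev named Nvme1n1 (params mirror the config printed in the trace).
gen_attach_json() {
  cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# -q 128 queued I/Os, -o 4096-byte I/O size, -w write workload, -t 1 second,
# -s 256 MB hugepage memory, -m 0x10 / -i 1 core mask and shm id -- matching
# the WRITE_PID job in the trace. The config is passed as a pipe via <( ),
# which is what shows up as --json /dev/fd/63 in the xtrace output.
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_attach_json) -q 128 -o 4096 -w write -t 1 -s 256

The read, flush and unmap jobs in the log differ only in the core mask / shm id (-m/-i) and the -w argument.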
00:13:16.077 16:05:45 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:16.077 16:05:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:16.077 16:05:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:16.077 16:05:45 -- common/autotest_common.sh@10 -- # set +x 00:13:16.077 ************************************ 00:13:16.077 START TEST nvmf_queue_depth 00:13:16.077 ************************************ 00:13:16.077 16:05:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:16.077 * Looking for test storage... 00:13:16.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:16.077 16:05:46 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:16.077 16:05:46 -- nvmf/common.sh@7 -- # uname -s 00:13:16.077 16:05:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.077 16:05:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.077 16:05:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.077 16:05:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.077 16:05:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.077 16:05:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.077 16:05:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.077 16:05:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.077 16:05:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.077 16:05:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.336 16:05:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:13:16.336 16:05:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:13:16.336 16:05:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.336 16:05:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.336 16:05:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:16.336 16:05:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.336 16:05:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:16.336 16:05:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.336 16:05:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.336 16:05:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.336 16:05:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.336 16:05:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.336 16:05:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.336 16:05:46 -- paths/export.sh@5 -- # export PATH 00:13:16.336 16:05:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.337 16:05:46 -- nvmf/common.sh@47 -- # : 0 00:13:16.337 16:05:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:16.337 16:05:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:16.337 16:05:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.337 16:05:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.337 16:05:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.337 16:05:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:16.337 16:05:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:16.337 16:05:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:16.337 16:05:46 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:16.337 16:05:46 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:16.337 16:05:46 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:16.337 16:05:46 -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:16.337 16:05:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:16.337 16:05:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.337 16:05:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:16.337 16:05:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:16.337 16:05:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:16.337 16:05:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.337 16:05:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.337 16:05:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.337 16:05:46 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:16.337 16:05:46 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:16.337 16:05:46 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:16.337 16:05:46 -- nvmf/common.sh@415 -- # [[ virt 
== phy-fallback ]] 00:13:16.337 16:05:46 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:16.337 16:05:46 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:16.337 16:05:46 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.337 16:05:46 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.337 16:05:46 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:16.337 16:05:46 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:16.337 16:05:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:16.337 16:05:46 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:16.337 16:05:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:16.337 16:05:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.337 16:05:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:16.337 16:05:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:16.337 16:05:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:16.337 16:05:46 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:16.337 16:05:46 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:16.337 16:05:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:16.337 Cannot find device "nvmf_tgt_br" 00:13:16.337 16:05:46 -- nvmf/common.sh@155 -- # true 00:13:16.337 16:05:46 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:16.337 Cannot find device "nvmf_tgt_br2" 00:13:16.337 16:05:46 -- nvmf/common.sh@156 -- # true 00:13:16.337 16:05:46 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:16.337 16:05:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:16.337 Cannot find device "nvmf_tgt_br" 00:13:16.337 16:05:46 -- nvmf/common.sh@158 -- # true 00:13:16.337 16:05:46 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:16.337 Cannot find device "nvmf_tgt_br2" 00:13:16.337 16:05:46 -- nvmf/common.sh@159 -- # true 00:13:16.337 16:05:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:16.337 16:05:46 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:16.337 16:05:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:16.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:16.337 16:05:46 -- nvmf/common.sh@162 -- # true 00:13:16.337 16:05:46 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:16.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:16.337 16:05:46 -- nvmf/common.sh@163 -- # true 00:13:16.337 16:05:46 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:16.337 16:05:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:16.337 16:05:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:16.337 16:05:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:16.337 16:05:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:16.337 16:05:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:16.337 16:05:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:16.337 16:05:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:16.337 16:05:46 -- nvmf/common.sh@180 -- # ip 
netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:16.337 16:05:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:16.337 16:05:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:16.337 16:05:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:16.337 16:05:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:16.337 16:05:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:16.337 16:05:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:16.597 16:05:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:16.597 16:05:46 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:16.597 16:05:46 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:16.597 16:05:46 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:16.597 16:05:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:16.597 16:05:46 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:16.597 16:05:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:16.597 16:05:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:16.597 16:05:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:16.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:13:16.597 00:13:16.597 --- 10.0.0.2 ping statistics --- 00:13:16.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.597 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:16.597 16:05:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:16.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:16.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:13:16.597 00:13:16.597 --- 10.0.0.3 ping statistics --- 00:13:16.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.597 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:16.597 16:05:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:16.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:16.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:16.597 00:13:16.597 --- 10.0.0.1 ping statistics --- 00:13:16.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.597 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:16.597 16:05:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.597 16:05:46 -- nvmf/common.sh@422 -- # return 0 00:13:16.597 16:05:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:16.597 16:05:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.597 16:05:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:16.597 16:05:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:16.597 16:05:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.597 16:05:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:16.597 16:05:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:16.597 16:05:46 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:16.597 16:05:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:16.597 16:05:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:16.597 16:05:46 -- common/autotest_common.sh@10 -- # set +x 00:13:16.597 16:05:46 -- nvmf/common.sh@470 -- # nvmfpid=79335 00:13:16.597 16:05:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:16.597 16:05:46 -- nvmf/common.sh@471 -- # waitforlisten 79335 00:13:16.597 16:05:46 -- common/autotest_common.sh@817 -- # '[' -z 79335 ']' 00:13:16.597 16:05:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.597 16:05:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:16.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.597 16:05:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.597 16:05:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:16.597 16:05:46 -- common/autotest_common.sh@10 -- # set +x 00:13:16.597 [2024-04-15 16:05:46.463020] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:13:16.597 [2024-04-15 16:05:46.463590] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.856 [2024-04-15 16:05:46.604353] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.856 [2024-04-15 16:05:46.654381] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.856 [2024-04-15 16:05:46.654435] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.856 [2024-04-15 16:05:46.654447] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.856 [2024-04-15 16:05:46.654457] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.856 [2024-04-15 16:05:46.654466] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:16.856 [2024-04-15 16:05:46.654496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.856 16:05:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:16.856 16:05:46 -- common/autotest_common.sh@850 -- # return 0 00:13:16.856 16:05:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:16.856 16:05:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:16.856 16:05:46 -- common/autotest_common.sh@10 -- # set +x 00:13:16.856 16:05:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.856 16:05:46 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:16.856 16:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.856 16:05:46 -- common/autotest_common.sh@10 -- # set +x 00:13:16.856 [2024-04-15 16:05:46.804654] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.856 16:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.856 16:05:46 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:16.856 16:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.856 16:05:46 -- common/autotest_common.sh@10 -- # set +x 00:13:17.115 Malloc0 00:13:17.115 16:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.115 16:05:46 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:17.115 16:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.115 16:05:46 -- common/autotest_common.sh@10 -- # set +x 00:13:17.115 16:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.115 16:05:46 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:17.115 16:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.115 16:05:46 -- common/autotest_common.sh@10 -- # set +x 00:13:17.115 16:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.115 16:05:46 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.115 16:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.115 16:05:46 -- common/autotest_common.sh@10 -- # set +x 00:13:17.115 [2024-04-15 16:05:46.862250] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.115 16:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:17.115 16:05:46 -- target/queue_depth.sh@30 -- # bdevperf_pid=79360 00:13:17.115 16:05:46 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:17.115 16:05:46 -- target/queue_depth.sh@33 -- # waitforlisten 79360 /var/tmp/bdevperf.sock 00:13:17.115 16:05:46 -- common/autotest_common.sh@817 -- # '[' -z 79360 ']' 00:13:17.115 16:05:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:17.115 16:05:46 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:17.115 16:05:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:17.115 16:05:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:17.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:17.115 16:05:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:17.115 16:05:46 -- common/autotest_common.sh@10 -- # set +x 00:13:17.115 [2024-04-15 16:05:46.904711] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:13:17.115 [2024-04-15 16:05:46.905304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79360 ] 00:13:17.115 [2024-04-15 16:05:47.036919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.373 [2024-04-15 16:05:47.105110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.940 16:05:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:17.940 16:05:47 -- common/autotest_common.sh@850 -- # return 0 00:13:17.940 16:05:47 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:17.940 16:05:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:17.940 16:05:47 -- common/autotest_common.sh@10 -- # set +x 00:13:18.199 NVMe0n1 00:13:18.199 16:05:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:18.199 16:05:47 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:18.199 Running I/O for 10 seconds... 00:13:28.166 00:13:28.166 Latency(us) 00:13:28.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.166 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:28.166 Verification LBA range: start 0x0 length 0x4000 00:13:28.166 NVMe0n1 : 10.07 8902.17 34.77 0.00 0.00 114477.76 19972.88 80390.83 00:13:28.166 =================================================================================================================== 00:13:28.166 Total : 8902.17 34.77 0.00 0.00 114477.76 19972.88 80390.83 00:13:28.166 0 00:13:28.166 16:05:58 -- target/queue_depth.sh@39 -- # killprocess 79360 00:13:28.166 16:05:58 -- common/autotest_common.sh@936 -- # '[' -z 79360 ']' 00:13:28.166 16:05:58 -- common/autotest_common.sh@940 -- # kill -0 79360 00:13:28.166 16:05:58 -- common/autotest_common.sh@941 -- # uname 00:13:28.166 16:05:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:28.166 16:05:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79360 00:13:28.424 killing process with pid 79360 00:13:28.424 Received shutdown signal, test time was about 10.000000 seconds 00:13:28.424 00:13:28.425 Latency(us) 00:13:28.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.425 =================================================================================================================== 00:13:28.425 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:28.425 16:05:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:28.425 16:05:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:28.425 16:05:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79360' 00:13:28.425 16:05:58 -- common/autotest_common.sh@955 -- # kill 79360 00:13:28.425 16:05:58 -- common/autotest_common.sh@960 -- # wait 79360 00:13:28.425 16:05:58 -- target/queue_depth.sh@41 -- # 
trap - SIGINT SIGTERM EXIT 00:13:28.425 16:05:58 -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:28.425 16:05:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:28.425 16:05:58 -- nvmf/common.sh@117 -- # sync 00:13:28.425 16:05:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:28.425 16:05:58 -- nvmf/common.sh@120 -- # set +e 00:13:28.425 16:05:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.425 16:05:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:28.425 rmmod nvme_tcp 00:13:28.683 rmmod nvme_fabrics 00:13:28.683 16:05:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:28.683 16:05:58 -- nvmf/common.sh@124 -- # set -e 00:13:28.683 16:05:58 -- nvmf/common.sh@125 -- # return 0 00:13:28.683 16:05:58 -- nvmf/common.sh@478 -- # '[' -n 79335 ']' 00:13:28.683 16:05:58 -- nvmf/common.sh@479 -- # killprocess 79335 00:13:28.683 16:05:58 -- common/autotest_common.sh@936 -- # '[' -z 79335 ']' 00:13:28.683 16:05:58 -- common/autotest_common.sh@940 -- # kill -0 79335 00:13:28.683 16:05:58 -- common/autotest_common.sh@941 -- # uname 00:13:28.683 16:05:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:28.683 16:05:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79335 00:13:28.683 16:05:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:28.683 16:05:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:28.683 16:05:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79335' 00:13:28.683 killing process with pid 79335 00:13:28.683 16:05:58 -- common/autotest_common.sh@955 -- # kill 79335 00:13:28.683 16:05:58 -- common/autotest_common.sh@960 -- # wait 79335 00:13:28.942 16:05:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:28.942 16:05:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:28.942 16:05:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:28.942 16:05:58 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.942 16:05:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:28.942 16:05:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.942 16:05:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.942 16:05:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.942 16:05:58 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:28.942 00:13:28.942 real 0m12.765s 00:13:28.942 user 0m22.224s 00:13:28.942 sys 0m2.339s 00:13:28.942 16:05:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:28.942 ************************************ 00:13:28.942 END TEST nvmf_queue_depth 00:13:28.942 ************************************ 00:13:28.942 16:05:58 -- common/autotest_common.sh@10 -- # set +x 00:13:28.942 16:05:58 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:28.942 16:05:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:28.943 16:05:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:28.943 16:05:58 -- common/autotest_common.sh@10 -- # set +x 00:13:28.943 ************************************ 00:13:28.943 START TEST nvmf_multipath 00:13:28.943 ************************************ 00:13:28.943 16:05:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:28.943 * Looking for test storage... 
00:13:29.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:29.201 16:05:58 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:29.201 16:05:58 -- nvmf/common.sh@7 -- # uname -s 00:13:29.201 16:05:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.201 16:05:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.201 16:05:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.201 16:05:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.201 16:05:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.201 16:05:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.201 16:05:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.201 16:05:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.201 16:05:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.201 16:05:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.201 16:05:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:13:29.201 16:05:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:13:29.201 16:05:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.201 16:05:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.201 16:05:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:29.201 16:05:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.201 16:05:58 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:29.201 16:05:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.201 16:05:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.201 16:05:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.201 16:05:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.201 16:05:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.201 16:05:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.201 16:05:58 -- paths/export.sh@5 -- # export PATH 00:13:29.201 16:05:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.201 16:05:58 -- nvmf/common.sh@47 -- # : 0 00:13:29.201 16:05:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:29.201 16:05:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:29.202 16:05:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.202 16:05:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.202 16:05:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.202 16:05:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:29.202 16:05:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:29.202 16:05:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:29.202 16:05:58 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:29.202 16:05:58 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:29.202 16:05:58 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:29.202 16:05:58 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:29.202 16:05:58 -- target/multipath.sh@43 -- # nvmftestinit 00:13:29.202 16:05:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:29.202 16:05:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.202 16:05:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:29.202 16:05:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:29.202 16:05:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:29.202 16:05:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.202 16:05:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.202 16:05:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.202 16:05:58 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:29.202 16:05:58 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:29.202 16:05:58 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:29.202 16:05:58 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:13:29.202 16:05:58 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:29.202 16:05:58 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:29.202 16:05:58 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.202 16:05:58 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.202 16:05:58 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:29.202 16:05:58 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:29.202 16:05:58 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:29.202 16:05:58 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:29.202 16:05:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:29.202 16:05:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.202 16:05:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:29.202 16:05:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:29.202 16:05:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:29.202 16:05:58 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:29.202 16:05:58 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:29.202 16:05:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:29.202 Cannot find device "nvmf_tgt_br" 00:13:29.202 16:05:58 -- nvmf/common.sh@155 -- # true 00:13:29.202 16:05:58 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:29.202 Cannot find device "nvmf_tgt_br2" 00:13:29.202 16:05:59 -- nvmf/common.sh@156 -- # true 00:13:29.202 16:05:59 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:29.202 16:05:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:29.202 Cannot find device "nvmf_tgt_br" 00:13:29.202 16:05:59 -- nvmf/common.sh@158 -- # true 00:13:29.202 16:05:59 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:29.202 Cannot find device "nvmf_tgt_br2" 00:13:29.202 16:05:59 -- nvmf/common.sh@159 -- # true 00:13:29.202 16:05:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:29.202 16:05:59 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:29.202 16:05:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:29.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.202 16:05:59 -- nvmf/common.sh@162 -- # true 00:13:29.202 16:05:59 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:29.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.202 16:05:59 -- nvmf/common.sh@163 -- # true 00:13:29.202 16:05:59 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:29.202 16:05:59 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:29.202 16:05:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:29.202 16:05:59 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:29.202 16:05:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:29.460 16:05:59 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:29.460 16:05:59 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:29.460 16:05:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:29.460 16:05:59 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:29.460 16:05:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:29.460 16:05:59 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:29.460 16:05:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:29.460 16:05:59 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:29.460 16:05:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:13:29.460 16:05:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:29.460 16:05:59 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:29.460 16:05:59 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:29.460 16:05:59 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:29.460 16:05:59 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:29.460 16:05:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:29.460 16:05:59 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:29.460 16:05:59 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:29.460 16:05:59 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:29.460 16:05:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:29.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:13:29.460 00:13:29.460 --- 10.0.0.2 ping statistics --- 00:13:29.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.461 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:13:29.461 16:05:59 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:29.461 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:29.461 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:13:29.461 00:13:29.461 --- 10.0.0.3 ping statistics --- 00:13:29.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.461 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:29.461 16:05:59 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:29.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:29.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:13:29.461 00:13:29.461 --- 10.0.0.1 ping statistics --- 00:13:29.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.461 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:29.461 16:05:59 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.461 16:05:59 -- nvmf/common.sh@422 -- # return 0 00:13:29.461 16:05:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:29.461 16:05:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.461 16:05:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:29.461 16:05:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:29.461 16:05:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.461 16:05:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:29.461 16:05:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:29.461 16:05:59 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:13:29.461 16:05:59 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:13:29.461 16:05:59 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:13:29.461 16:05:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:29.461 16:05:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:29.461 16:05:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.461 16:05:59 -- nvmf/common.sh@470 -- # nvmfpid=79685 00:13:29.461 16:05:59 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.461 16:05:59 -- nvmf/common.sh@471 -- # waitforlisten 79685 00:13:29.461 16:05:59 -- common/autotest_common.sh@817 -- # '[' -z 79685 ']' 00:13:29.461 16:05:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.461 16:05:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:29.461 16:05:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.461 16:05:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:29.461 16:05:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.720 [2024-04-15 16:05:59.449373] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:13:29.720 [2024-04-15 16:05:59.449778] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.720 [2024-04-15 16:05:59.611885] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.720 [2024-04-15 16:05:59.670396] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.720 [2024-04-15 16:05:59.670729] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.720 [2024-04-15 16:05:59.670893] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.720 [2024-04-15 16:05:59.671065] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.720 [2024-04-15 16:05:59.671175] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:29.720 [2024-04-15 16:05:59.671360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.720 [2024-04-15 16:05:59.671592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.720 [2024-04-15 16:05:59.671501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.720 [2024-04-15 16:05:59.671563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.654 16:06:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:30.654 16:06:00 -- common/autotest_common.sh@850 -- # return 0 00:13:30.654 16:06:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:30.654 16:06:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:30.654 16:06:00 -- common/autotest_common.sh@10 -- # set +x 00:13:30.654 16:06:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.654 16:06:00 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:30.654 [2024-04-15 16:06:00.612007] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.911 16:06:00 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:13:31.169 Malloc0 00:13:31.169 16:06:00 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:13:31.427 16:06:01 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:31.685 16:06:01 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.943 [2024-04-15 16:06:01.699447] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.943 16:06:01 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:32.200 [2024-04-15 16:06:01.959721] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:32.200 16:06:01 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:13:32.200 16:06:02 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:13:32.462 16:06:02 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:13:32.462 16:06:02 -- common/autotest_common.sh@1184 -- # local i=0 00:13:32.462 16:06:02 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:32.462 16:06:02 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:32.462 16:06:02 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:34.363 16:06:04 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:34.363 16:06:04 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:34.363 16:06:04 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.363 16:06:04 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:34.363 16:06:04 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.363 16:06:04 -- common/autotest_common.sh@1194 -- # return 0 00:13:34.363 16:06:04 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:13:34.363 16:06:04 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:13:34.363 16:06:04 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:13:34.363 16:06:04 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:34.363 16:06:04 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:13:34.363 16:06:04 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:13:34.363 16:06:04 -- target/multipath.sh@38 -- # return 0 00:13:34.363 16:06:04 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:13:34.363 16:06:04 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:13:34.363 16:06:04 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:13:34.363 16:06:04 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:13:34.363 16:06:04 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:13:34.363 16:06:04 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:13:34.363 16:06:04 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:13:34.363 16:06:04 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:34.363 16:06:04 -- target/multipath.sh@22 -- # local timeout=20 00:13:34.363 16:06:04 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:34.363 16:06:04 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:34.363 16:06:04 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:34.363 16:06:04 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:13:34.363 16:06:04 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:34.363 16:06:04 -- target/multipath.sh@22 -- # local timeout=20 00:13:34.363 16:06:04 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:34.363 16:06:04 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:34.363 16:06:04 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:34.363 16:06:04 -- target/multipath.sh@85 -- # echo numa 00:13:34.363 16:06:04 -- target/multipath.sh@88 -- # fio_pid=79775 00:13:34.363 16:06:04 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:34.363 16:06:04 -- target/multipath.sh@90 -- # sleep 1 00:13:34.363 [global] 00:13:34.363 thread=1 00:13:34.363 invalidate=1 00:13:34.363 rw=randrw 00:13:34.363 time_based=1 00:13:34.363 runtime=6 00:13:34.363 ioengine=libaio 00:13:34.363 direct=1 00:13:34.363 bs=4096 00:13:34.363 iodepth=128 00:13:34.363 norandommap=0 00:13:34.363 numjobs=1 00:13:34.363 00:13:34.363 verify_dump=1 00:13:34.363 verify_backlog=512 00:13:34.363 verify_state_save=0 00:13:34.363 do_verify=1 00:13:34.363 verify=crc32c-intel 00:13:34.363 [job0] 00:13:34.363 filename=/dev/nvme0n1 00:13:34.363 Could not set queue depth (nvme0n1) 00:13:34.623 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:34.623 fio-3.35 00:13:34.623 Starting 1 thread 00:13:35.557 16:06:05 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:13:35.815 16:06:05 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:36.073 16:06:05 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:13:36.073 16:06:05 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:36.073 16:06:05 -- target/multipath.sh@22 -- # local timeout=20 00:13:36.073 16:06:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:36.073 16:06:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:36.073 16:06:05 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:36.073 16:06:05 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:13:36.073 16:06:05 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:36.073 16:06:05 -- target/multipath.sh@22 -- # local timeout=20 00:13:36.073 16:06:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:36.073 16:06:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:36.073 16:06:05 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:36.073 16:06:05 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:13:36.333 16:06:06 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:36.603 16:06:06 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:13:36.603 16:06:06 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:36.603 16:06:06 -- target/multipath.sh@22 -- # local timeout=20 00:13:36.603 16:06:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:36.603 16:06:06 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:13:36.603 16:06:06 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:36.603 16:06:06 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:13:36.603 16:06:06 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:36.603 16:06:06 -- target/multipath.sh@22 -- # local timeout=20 00:13:36.603 16:06:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:36.603 16:06:06 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:36.603 16:06:06 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:36.603 16:06:06 -- target/multipath.sh@104 -- # wait 79775 00:13:40.790 00:13:40.790 job0: (groupid=0, jobs=1): err= 0: pid=79796: Mon Apr 15 16:06:10 2024 00:13:40.790 read: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(248MiB/6006msec) 00:13:40.790 slat (usec): min=6, max=7176, avg=56.21, stdev=221.87 00:13:40.790 clat (usec): min=1562, max=16679, avg=8206.80, stdev=1503.49 00:13:40.790 lat (usec): min=1574, max=16698, avg=8263.01, stdev=1507.74 00:13:40.790 clat percentiles (usec): 00:13:40.790 | 1.00th=[ 4359], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 7439], 00:13:40.790 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8160], 00:13:40.790 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9896], 95.00th=[11600], 00:13:40.790 | 99.00th=[12911], 99.50th=[13304], 99.90th=[15270], 99.95th=[15270], 00:13:40.790 | 99.99th=[16581] 00:13:40.790 bw ( KiB/s): min=11544, max=26496, per=52.75%, avg=22296.73, stdev=4389.70, samples=11 00:13:40.790 iops : min= 2886, max= 6624, avg=5574.18, stdev=1097.42, samples=11 00:13:40.790 write: IOPS=6141, BW=24.0MiB/s (25.2MB/s)(132MiB/5496msec); 0 zone resets 00:13:40.790 slat (usec): min=7, max=4993, avg=62.06, stdev=169.52 00:13:40.790 clat (usec): min=2314, max=16292, avg=7142.52, stdev=1330.51 00:13:40.790 lat (usec): min=2335, max=16311, avg=7204.59, stdev=1334.47 00:13:40.790 clat percentiles (usec): 00:13:40.790 | 1.00th=[ 3359], 5.00th=[ 4293], 10.00th=[ 5473], 20.00th=[ 6587], 00:13:40.790 | 30.00th=[ 6849], 40.00th=[ 7046], 50.00th=[ 7242], 60.00th=[ 7439], 00:13:40.790 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8225], 95.00th=[ 8979], 00:13:40.790 | 99.00th=[11207], 99.50th=[11863], 99.90th=[13042], 99.95th=[13435], 00:13:40.790 | 99.99th=[14615] 00:13:40.790 bw ( KiB/s): min=12056, max=25808, per=90.92%, avg=22335.27, stdev=4099.20, samples=11 00:13:40.790 iops : min= 3014, max= 6452, avg=5583.82, stdev=1024.80, samples=11 00:13:40.790 lat (msec) : 2=0.02%, 4=1.49%, 10=91.23%, 20=7.26% 00:13:40.790 cpu : usr=5.58%, sys=19.93%, ctx=5640, majf=0, minf=96 00:13:40.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:40.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.790 issued rwts: total=63459,33752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.790 00:13:40.790 Run status group 0 (all jobs): 00:13:40.790 READ: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=248MiB (260MB), run=6006-6006msec 00:13:40.790 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=132MiB (138MB), run=5496-5496msec 00:13:40.790 00:13:40.790 Disk stats (read/write): 00:13:40.790 nvme0n1: ios=62519/33109, merge=0/0, 
ticks=493024/223216, in_queue=716240, util=98.67% 00:13:40.790 16:06:10 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:13:41.048 16:06:10 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:13:41.306 16:06:11 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:13:41.306 16:06:11 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:41.306 16:06:11 -- target/multipath.sh@22 -- # local timeout=20 00:13:41.306 16:06:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:41.306 16:06:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:41.306 16:06:11 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:41.306 16:06:11 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:13:41.306 16:06:11 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:41.306 16:06:11 -- target/multipath.sh@22 -- # local timeout=20 00:13:41.306 16:06:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:41.306 16:06:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:41.306 16:06:11 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:41.306 16:06:11 -- target/multipath.sh@113 -- # echo round-robin 00:13:41.306 16:06:11 -- target/multipath.sh@116 -- # fio_pid=79870 00:13:41.307 16:06:11 -- target/multipath.sh@118 -- # sleep 1 00:13:41.307 16:06:11 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:41.307 [global] 00:13:41.307 thread=1 00:13:41.307 invalidate=1 00:13:41.307 rw=randrw 00:13:41.307 time_based=1 00:13:41.307 runtime=6 00:13:41.307 ioengine=libaio 00:13:41.307 direct=1 00:13:41.307 bs=4096 00:13:41.307 iodepth=128 00:13:41.307 norandommap=0 00:13:41.307 numjobs=1 00:13:41.307 00:13:41.307 verify_dump=1 00:13:41.307 verify_backlog=512 00:13:41.307 verify_state_save=0 00:13:41.307 do_verify=1 00:13:41.307 verify=crc32c-intel 00:13:41.307 [job0] 00:13:41.307 filename=/dev/nvme0n1 00:13:41.307 Could not set queue depth (nvme0n1) 00:13:41.565 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:41.565 fio-3.35 00:13:41.565 Starting 1 thread 00:13:42.500 16:06:12 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:13:42.500 16:06:12 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:42.758 16:06:12 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:13:42.758 16:06:12 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:42.758 16:06:12 -- target/multipath.sh@22 -- # local timeout=20 00:13:42.758 16:06:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:42.758 16:06:12 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:13:42.758 16:06:12 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:42.758 16:06:12 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:13:42.758 16:06:12 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:42.758 16:06:12 -- target/multipath.sh@22 -- # local timeout=20 00:13:42.758 16:06:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:42.758 16:06:12 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:42.758 16:06:12 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:42.758 16:06:12 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:13:43.026 16:06:12 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:43.284 16:06:13 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:13:43.284 16:06:13 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:43.284 16:06:13 -- target/multipath.sh@22 -- # local timeout=20 00:13:43.284 16:06:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:43.284 16:06:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:43.284 16:06:13 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:43.284 16:06:13 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:13:43.284 16:06:13 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:43.284 16:06:13 -- target/multipath.sh@22 -- # local timeout=20 00:13:43.284 16:06:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:43.284 16:06:13 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:43.284 16:06:13 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:43.284 16:06:13 -- target/multipath.sh@132 -- # wait 79870 00:13:48.549 00:13:48.549 job0: (groupid=0, jobs=1): err= 0: pid=79895: Mon Apr 15 16:06:17 2024 00:13:48.549 read: IOPS=11.4k, BW=44.6MiB/s (46.8MB/s)(268MiB/6006msec) 00:13:48.549 slat (usec): min=6, max=6280, avg=45.37, stdev=192.23 00:13:48.549 clat (usec): min=283, max=24332, avg=7691.53, stdev=2191.50 00:13:48.549 lat (usec): min=293, max=24344, avg=7736.90, stdev=2204.17 00:13:48.549 clat percentiles (usec): 00:13:48.549 | 1.00th=[ 1909], 5.00th=[ 3818], 10.00th=[ 5014], 20.00th=[ 6521], 00:13:48.549 | 30.00th=[ 7242], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7963], 00:13:48.549 | 70.00th=[ 8225], 80.00th=[ 8717], 90.00th=[ 9896], 95.00th=[11600], 00:13:48.549 | 99.00th=[14222], 99.50th=[16450], 99.90th=[21365], 99.95th=[22414], 00:13:48.549 | 99.99th=[23462] 00:13:48.549 bw ( KiB/s): min=12104, max=36328, per=51.99%, avg=23736.00, stdev=7149.91, samples=11 00:13:48.549 iops : min= 3026, max= 9082, avg=5934.00, stdev=1787.48, samples=11 00:13:48.549 write: IOPS=6694, BW=26.1MiB/s (27.4MB/s)(141MiB/5378msec); 0 zone resets 00:13:48.549 slat (usec): min=6, max=3187, avg=50.64, stdev=148.78 00:13:48.549 clat (usec): min=250, max=21939, avg=6534.23, stdev=2126.52 00:13:48.549 lat (usec): min=278, max=21961, avg=6584.86, stdev=2141.58 00:13:48.549 clat percentiles (usec): 00:13:48.549 | 1.00th=[ 1680], 5.00th=[ 3163], 10.00th=[ 3851], 20.00th=[ 4686], 00:13:48.550 | 30.00th=[ 5538], 40.00th=[ 6587], 50.00th=[ 6980], 60.00th=[ 7242], 00:13:48.550 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8225], 95.00th=[ 9241], 00:13:48.550 | 99.00th=[12649], 99.50th=[17957], 99.90th=[19792], 99.95th=[20317], 00:13:48.550 | 99.99th=[21890] 00:13:48.550 bw ( KiB/s): min=12768, max=35264, per=88.72%, avg=23757.09, stdev=6932.50, samples=11 00:13:48.550 iops : min= 3192, max= 8816, avg=5939.27, stdev=1733.12, samples=11 00:13:48.550 lat (usec) : 500=0.05%, 750=0.07%, 1000=0.13% 00:13:48.550 lat (msec) : 2=0.96%, 4=6.50%, 10=84.71%, 20=7.41%, 50=0.17% 00:13:48.550 cpu : usr=5.61%, sys=20.45%, ctx=6387, majf=0, minf=96 00:13:48.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:48.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.550 issued rwts: total=68553,36002,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.550 00:13:48.550 Run status group 0 (all jobs): 00:13:48.550 READ: bw=44.6MiB/s (46.8MB/s), 44.6MiB/s-44.6MiB/s (46.8MB/s-46.8MB/s), io=268MiB (281MB), run=6006-6006msec 00:13:48.550 WRITE: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=141MiB (147MB), run=5378-5378msec 00:13:48.550 00:13:48.550 Disk stats (read/write): 00:13:48.550 nvme0n1: ios=67619/35360, merge=0/0, ticks=499773/217492, in_queue=717265, util=98.65% 00:13:48.550 16:06:17 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:48.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:48.550 16:06:17 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:48.550 16:06:17 -- common/autotest_common.sh@1205 -- # local i=0 00:13:48.550 16:06:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.550 
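The check_ana_state helper that the trace keeps expanding above boils down to polling one sysfs attribute until the multipath controller path reports the expected ANA state. The following is a minimal sketch reconstructed from the traced variables (path, ana_state, timeout=20, ana_state_f); the actual body in target/multipath.sh may differ in detail:

check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # Wait until the per-path ana_state file exists and matches the expected value.
    while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
        sleep 1
        (( timeout-- == 0 )) && return 1
    done
}

In the run above every path reaches the expected state on the first check, so no retries appear in the trace.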
16:06:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:48.550 16:06:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:48.550 16:06:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.550 16:06:17 -- common/autotest_common.sh@1217 -- # return 0 00:13:48.550 16:06:17 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.550 16:06:17 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:13:48.550 16:06:17 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:13:48.550 16:06:17 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:13:48.550 16:06:17 -- target/multipath.sh@144 -- # nvmftestfini 00:13:48.550 16:06:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:48.550 16:06:17 -- nvmf/common.sh@117 -- # sync 00:13:48.550 16:06:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:48.550 16:06:17 -- nvmf/common.sh@120 -- # set +e 00:13:48.550 16:06:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:48.550 16:06:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:48.550 rmmod nvme_tcp 00:13:48.550 rmmod nvme_fabrics 00:13:48.550 16:06:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:48.550 16:06:17 -- nvmf/common.sh@124 -- # set -e 00:13:48.550 16:06:17 -- nvmf/common.sh@125 -- # return 0 00:13:48.550 16:06:17 -- nvmf/common.sh@478 -- # '[' -n 79685 ']' 00:13:48.550 16:06:17 -- nvmf/common.sh@479 -- # killprocess 79685 00:13:48.550 16:06:17 -- common/autotest_common.sh@936 -- # '[' -z 79685 ']' 00:13:48.550 16:06:17 -- common/autotest_common.sh@940 -- # kill -0 79685 00:13:48.550 16:06:17 -- common/autotest_common.sh@941 -- # uname 00:13:48.550 16:06:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:48.550 16:06:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79685 00:13:48.550 killing process with pid 79685 00:13:48.550 16:06:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:48.550 16:06:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:48.550 16:06:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79685' 00:13:48.550 16:06:17 -- common/autotest_common.sh@955 -- # kill 79685 00:13:48.550 16:06:17 -- common/autotest_common.sh@960 -- # wait 79685 00:13:48.550 16:06:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:48.550 16:06:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:48.550 16:06:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:48.550 16:06:18 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.550 16:06:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.550 16:06:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.550 16:06:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.550 16:06:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.550 16:06:18 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:48.550 ************************************ 00:13:48.550 END TEST nvmf_multipath 00:13:48.550 ************************************ 00:13:48.550 00:13:48.550 real 0m19.430s 00:13:48.550 user 1m11.052s 00:13:48.550 sys 0m10.715s 00:13:48.550 16:06:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:48.550 16:06:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.550 16:06:18 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:48.550 16:06:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:48.550 16:06:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:48.550 16:06:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.550 ************************************ 00:13:48.550 START TEST nvmf_zcopy 00:13:48.550 ************************************ 00:13:48.550 16:06:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:48.550 * Looking for test storage... 00:13:48.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:48.550 16:06:18 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.550 16:06:18 -- nvmf/common.sh@7 -- # uname -s 00:13:48.550 16:06:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.550 16:06:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.550 16:06:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.550 16:06:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.550 16:06:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.550 16:06:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.550 16:06:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.550 16:06:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.550 16:06:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.550 16:06:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.550 16:06:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:13:48.550 16:06:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:13:48.550 16:06:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.550 16:06:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.550 16:06:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.550 16:06:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.550 16:06:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.550 16:06:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.550 16:06:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.550 16:06:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.550 16:06:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.550 16:06:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.550 16:06:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.550 16:06:18 -- paths/export.sh@5 -- # export PATH 00:13:48.551 16:06:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.551 16:06:18 -- nvmf/common.sh@47 -- # : 0 00:13:48.551 16:06:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.551 16:06:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.551 16:06:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.551 16:06:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.551 16:06:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.551 16:06:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.551 16:06:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.551 16:06:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.551 16:06:18 -- target/zcopy.sh@12 -- # nvmftestinit 00:13:48.551 16:06:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:48.551 16:06:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.551 16:06:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:48.551 16:06:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:48.551 16:06:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:48.551 16:06:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.551 16:06:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.551 16:06:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.551 16:06:18 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:48.551 16:06:18 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:48.551 16:06:18 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:48.551 16:06:18 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:13:48.551 16:06:18 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:48.551 16:06:18 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:48.551 16:06:18 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.551 16:06:18 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.551 16:06:18 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:48.551 16:06:18 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:48.551 16:06:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.551 16:06:18 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.551 16:06:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.551 16:06:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.551 16:06:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.551 16:06:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:48.551 16:06:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.551 16:06:18 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.551 16:06:18 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:48.809 16:06:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:48.809 Cannot find device "nvmf_tgt_br" 00:13:48.809 16:06:18 -- nvmf/common.sh@155 -- # true 00:13:48.809 16:06:18 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.809 Cannot find device "nvmf_tgt_br2" 00:13:48.809 16:06:18 -- nvmf/common.sh@156 -- # true 00:13:48.809 16:06:18 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:48.809 16:06:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:48.809 Cannot find device "nvmf_tgt_br" 00:13:48.809 16:06:18 -- nvmf/common.sh@158 -- # true 00:13:48.809 16:06:18 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:48.809 Cannot find device "nvmf_tgt_br2" 00:13:48.809 16:06:18 -- nvmf/common.sh@159 -- # true 00:13:48.809 16:06:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:48.809 16:06:18 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:48.809 16:06:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:48.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.809 16:06:18 -- nvmf/common.sh@162 -- # true 00:13:48.809 16:06:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:48.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.809 16:06:18 -- nvmf/common.sh@163 -- # true 00:13:48.809 16:06:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:48.809 16:06:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:48.809 16:06:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:48.809 16:06:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:48.809 16:06:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:48.809 16:06:18 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:48.809 16:06:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:48.809 16:06:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:48.809 16:06:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:48.809 16:06:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:48.809 16:06:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:49.068 16:06:18 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:49.068 16:06:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:49.068 16:06:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:49.068 16:06:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:49.068 16:06:18 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:49.068 16:06:18 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:49.068 16:06:18 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:49.068 16:06:18 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:49.068 16:06:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:49.068 16:06:18 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:49.068 16:06:18 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:49.068 16:06:18 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:49.068 16:06:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:49.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:13:49.068 00:13:49.068 --- 10.0.0.2 ping statistics --- 00:13:49.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.068 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:13:49.068 16:06:18 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:49.068 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:49.068 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:13:49.068 00:13:49.068 --- 10.0.0.3 ping statistics --- 00:13:49.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.068 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:49.068 16:06:18 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:49.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:49.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:49.068 00:13:49.068 --- 10.0.0.1 ping statistics --- 00:13:49.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.068 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:49.068 16:06:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.068 16:06:18 -- nvmf/common.sh@422 -- # return 0 00:13:49.068 16:06:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:49.068 16:06:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.068 16:06:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:49.068 16:06:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:49.068 16:06:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.068 16:06:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:49.068 16:06:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:49.068 16:06:18 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:49.068 16:06:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:49.068 16:06:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:49.068 16:06:18 -- common/autotest_common.sh@10 -- # set +x 00:13:49.068 16:06:18 -- nvmf/common.sh@470 -- # nvmfpid=80153 00:13:49.068 16:06:18 -- nvmf/common.sh@471 -- # waitforlisten 80153 00:13:49.068 16:06:18 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:49.068 16:06:18 -- common/autotest_common.sh@817 -- # '[' -z 80153 ']' 00:13:49.068 16:06:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.068 16:06:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:49.068 16:06:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.068 16:06:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:49.068 16:06:18 -- common/autotest_common.sh@10 -- # set +x 00:13:49.068 [2024-04-15 16:06:18.964905] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:13:49.068 [2024-04-15 16:06:18.965547] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.327 [2024-04-15 16:06:19.102916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.327 [2024-04-15 16:06:19.152697] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.327 [2024-04-15 16:06:19.152975] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.327 [2024-04-15 16:06:19.153126] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.327 [2024-04-15 16:06:19.153245] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.327 [2024-04-15 16:06:19.153280] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:49.327 [2024-04-15 16:06:19.153383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.953 16:06:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:49.953 16:06:19 -- common/autotest_common.sh@850 -- # return 0 00:13:49.953 16:06:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:49.953 16:06:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:49.953 16:06:19 -- common/autotest_common.sh@10 -- # set +x 00:13:50.212 16:06:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.212 16:06:19 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:50.212 16:06:19 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:50.212 16:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.212 16:06:19 -- common/autotest_common.sh@10 -- # set +x 00:13:50.212 [2024-04-15 16:06:19.959810] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.212 16:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.212 16:06:19 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:50.212 16:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.212 16:06:19 -- common/autotest_common.sh@10 -- # set +x 00:13:50.212 16:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.212 16:06:19 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.212 16:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.212 16:06:19 -- common/autotest_common.sh@10 -- # set +x 00:13:50.212 [2024-04-15 16:06:19.983960] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.212 16:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.212 16:06:19 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:50.212 16:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.212 16:06:19 -- common/autotest_common.sh@10 -- # set +x 00:13:50.212 16:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.212 16:06:20 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:50.212 16:06:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.212 16:06:20 -- common/autotest_common.sh@10 -- # set +x 00:13:50.212 malloc0 00:13:50.212 16:06:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.212 16:06:20 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:50.212 16:06:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.212 16:06:20 -- common/autotest_common.sh@10 -- # set +x 00:13:50.212 16:06:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.212 16:06:20 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:50.212 16:06:20 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:50.212 16:06:20 -- nvmf/common.sh@521 -- # config=() 00:13:50.212 16:06:20 -- nvmf/common.sh@521 -- # local subsystem config 00:13:50.212 16:06:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:50.212 16:06:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:50.212 { 00:13:50.212 "params": { 00:13:50.212 "name": "Nvme$subsystem", 00:13:50.212 "trtype": "$TEST_TRANSPORT", 
00:13:50.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:50.212 "adrfam": "ipv4", 00:13:50.212 "trsvcid": "$NVMF_PORT", 00:13:50.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:50.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:50.212 "hdgst": ${hdgst:-false}, 00:13:50.212 "ddgst": ${ddgst:-false} 00:13:50.212 }, 00:13:50.212 "method": "bdev_nvme_attach_controller" 00:13:50.213 } 00:13:50.213 EOF 00:13:50.213 )") 00:13:50.213 16:06:20 -- nvmf/common.sh@543 -- # cat 00:13:50.213 16:06:20 -- nvmf/common.sh@545 -- # jq . 00:13:50.213 16:06:20 -- nvmf/common.sh@546 -- # IFS=, 00:13:50.213 16:06:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:50.213 "params": { 00:13:50.213 "name": "Nvme1", 00:13:50.213 "trtype": "tcp", 00:13:50.213 "traddr": "10.0.0.2", 00:13:50.213 "adrfam": "ipv4", 00:13:50.213 "trsvcid": "4420", 00:13:50.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:50.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:50.213 "hdgst": false, 00:13:50.213 "ddgst": false 00:13:50.213 }, 00:13:50.213 "method": "bdev_nvme_attach_controller" 00:13:50.213 }' 00:13:50.213 [2024-04-15 16:06:20.077438] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:13:50.213 [2024-04-15 16:06:20.077709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80186 ] 00:13:50.471 [2024-04-15 16:06:20.217491] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.471 [2024-04-15 16:06:20.271130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.471 [2024-04-15 16:06:20.280371] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:13:50.471 Running I/O for 10 seconds... 
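bdevperf is not driven over the RPC socket here; it boots from a JSON config passed on a file descriptor (--json /dev/fd/62) that gen_nvmf_target_json assembles from the parameters printed above. Roughly, the document it receives is that single bdev_nvme_attach_controller entry wrapped in a bdev subsystem section; the wrapper itself is not shown in this log, so its exact shape below is an assumption:

cat <<'EOF' > /tmp/bdevperf_nvmf.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Hypothetical standalone equivalent of the verify run started above, using a file instead of /dev/fd/62:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192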
00:14:02.688 00:14:02.688 Latency(us) 00:14:02.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.688 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:02.688 Verification LBA range: start 0x0 length 0x1000 00:14:02.688 Nvme1n1 : 10.01 6961.12 54.38 0.00 0.00 18335.82 3073.95 30084.14 00:14:02.688 =================================================================================================================== 00:14:02.688 Total : 6961.12 54.38 0.00 0.00 18335.82 3073.95 30084.14 00:14:02.688 16:06:30 -- target/zcopy.sh@39 -- # perfpid=80298 00:14:02.688 16:06:30 -- target/zcopy.sh@41 -- # xtrace_disable 00:14:02.688 16:06:30 -- common/autotest_common.sh@10 -- # set +x 00:14:02.688 16:06:30 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:02.688 16:06:30 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:02.688 16:06:30 -- nvmf/common.sh@521 -- # config=() 00:14:02.688 16:06:30 -- nvmf/common.sh@521 -- # local subsystem config 00:14:02.688 16:06:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:02.688 16:06:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:02.688 { 00:14:02.688 "params": { 00:14:02.688 "name": "Nvme$subsystem", 00:14:02.688 "trtype": "$TEST_TRANSPORT", 00:14:02.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:02.688 "adrfam": "ipv4", 00:14:02.689 "trsvcid": "$NVMF_PORT", 00:14:02.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:02.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:02.689 "hdgst": ${hdgst:-false}, 00:14:02.689 "ddgst": ${ddgst:-false} 00:14:02.689 }, 00:14:02.689 "method": "bdev_nvme_attach_controller" 00:14:02.689 } 00:14:02.689 EOF 00:14:02.689 )") 00:14:02.689 [2024-04-15 16:06:30.637924] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.638126] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 16:06:30 -- nvmf/common.sh@543 -- # cat 00:14:02.689 16:06:30 -- nvmf/common.sh@545 -- # jq . 
00:14:02.689 16:06:30 -- nvmf/common.sh@546 -- # IFS=, 00:14:02.689 16:06:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:02.689 "params": { 00:14:02.689 "name": "Nvme1", 00:14:02.689 "trtype": "tcp", 00:14:02.689 "traddr": "10.0.0.2", 00:14:02.689 "adrfam": "ipv4", 00:14:02.689 "trsvcid": "4420", 00:14:02.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.689 "hdgst": false, 00:14:02.689 "ddgst": false 00:14:02.689 }, 00:14:02.689 "method": "bdev_nvme_attach_controller" 00:14:02.689 }' 00:14:02.689 [2024-04-15 16:06:30.649890] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.650045] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.657883] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.657998] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.669886] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.670015] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.681894] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.681958] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.688110] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:14:02.689 [2024-04-15 16:06:30.688507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80298 ] 00:14:02.689 [2024-04-15 16:06:30.693890] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.694018] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.705896] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.706018] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.717902] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.718015] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.729906] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.730024] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.741922] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.742070] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.753920] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.754084] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.765917] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 
16:06:30.766038] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.777930] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.778059] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.789917] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.790020] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.801924] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.802037] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.813928] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.814065] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.825942] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.826048] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.836153] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.689 [2024-04-15 16:06:30.837934] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.838048] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.849949] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.850113] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.861956] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.862113] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.873972] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.874122] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.885959] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.886111] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.886253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.689 [2024-04-15 16:06:30.895215] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:14:02.689 [2024-04-15 16:06:30.897963] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.898084] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.909973] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.910129] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.921977] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.922133] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:02.689 [2024-04-15 16:06:30.933990] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.934136] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.945979] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.946098] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.957978] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.958094] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.969992] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.970094] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.982012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.982165] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:30.994005] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:30.994173] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:31.006015] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:31.006154] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:31.018036] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:31.018167] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:31.030051] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:31.030190] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 Running I/O for 5 seconds... 
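The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs in this run are consistent with a stress loop that keeps re-issuing the nvmf_subsystem_add_ns RPC for a namespace that is still attached. A minimal sketch of such a loop follows, assuming SPDK's scripts/rpc.py helper on the default RPC socket (/var/tmp/spdk.sock) and illustrative subsystem/bdev names (nqn.2016-06.io.spdk:cnode1, Malloc0) that are placeholders, not values taken from this log:

#!/usr/bin/env bash
# Re-request the same fixed NSID while namespace 1 is still attached; every call
# after the first is expected to fail with "Requested NSID 1 already in use",
# which is the error pair seen throughout this log. Run from the SPDK repo root.
for i in $(seq 1 20); do
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1 || true
done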
00:14:02.689 [2024-04-15 16:06:31.042075] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:31.042193] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:31.059987] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:31.060150] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:31.076496] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:31.076666] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.689 [2024-04-15 16:06:31.093978] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.689 [2024-04-15 16:06:31.094125] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.109464] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.109618] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.120923] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.121067] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.136822] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.136990] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.152908] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.153054] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.170642] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.170797] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.187247] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.187403] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.204594] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.204768] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.221079] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.221234] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.237431] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.237591] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.255018] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.255173] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.270692] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 
[2024-04-15 16:06:31.270825] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.287699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.287844] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.304383] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.304536] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.321992] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.322161] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.337163] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.337328] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.354109] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.354288] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.365510] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.365672] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.380847] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.381028] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.399062] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.399213] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.413971] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.414123] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.430784] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.430949] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.447253] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.447419] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.462950] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.463131] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.474819] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.474983] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.491701] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.491868] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.507460] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.507618] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.525651] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.525811] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.541480] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.541657] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.558127] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.558293] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.575226] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.575403] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.591220] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.591390] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.609059] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.609236] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.624279] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.624459] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.640423] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.640609] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.657556] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.657755] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.674339] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.674529] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.690667] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.690826] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.707054] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.707237] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.718846] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.718996] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.735159] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.735320] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.751847] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.752002] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.768591] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.768758] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.784682] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.784813] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.801922] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.802081] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.817422] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.817584] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.829145] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.829288] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.845278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.845440] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.861471] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.861659] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.878565] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.878732] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.893855] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.893997] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.909337] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.909494] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.926156] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.926309] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.942761] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.690 [2024-04-15 16:06:31.942923] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.690 [2024-04-15 16:06:31.961133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:31.961296] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:31.975147] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:31.975299] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:31.992361] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:31.992515] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.008507] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.008677] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.026265] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.026440] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.042757] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.042905] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.059214] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.059387] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.076072] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.076225] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.092324] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.092483] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.109453] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.109643] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.124817] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.124993] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.136413] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.136589] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.152438] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.152602] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.169455] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.169634] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.185439] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.185587] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.202765] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.202903] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.218637] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.218795] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.236654] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.236797] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.252429] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.252597] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.264072] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.264223] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.279790] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.279934] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.297225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.297372] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.312195] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.312355] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.329686] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.329833] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.344570] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.344769] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.360181] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.360354] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.371219] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.371396] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.387356] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.387527] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.404874] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.405053] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.424850] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.425054] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.435083] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.435246] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.449589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.449761] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.459214] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.459391] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.473695] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.473862] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.489509] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.489702] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.507599] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.507743] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.524017] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.524163] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.540830] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.541004] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.558136] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.558315] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.574011] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.574188] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.592134] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.592289] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.606436] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.606626] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.622511] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.622691] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.691 [2024-04-15 16:06:32.638617] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.691 [2024-04-15 16:06:32.638790] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.656606] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.656766] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.673562] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.673734] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.689573] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.689743] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.706668] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.706835] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.723417] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.723590] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.739722] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.739873] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.757183] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.757334] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.772873] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.773036] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.790455] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.790622] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.807108] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.807263] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.823900] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.824055] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.839868] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.840027] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.857484] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.857668] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.872999] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.873157] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.884957] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.885132] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.950 [2024-04-15 16:06:32.900864] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.950 [2024-04-15 16:06:32.901075] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:32.917224] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:32.917383] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:32.934871] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:32.935037] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:32.950181] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:32.950336] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:32.969334] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:32.969504] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:32.984037] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:32.984190] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:33.001209] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:33.001364] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:33.018931] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:33.019079] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:33.034483] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:33.034651] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:33.052554] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:33.052715] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:33.068465] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:33.068628] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:33.086456] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:33.086618] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:33.102238] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:33.102385] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:33.113791] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:33.113930] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:33.129650] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:33.129794] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:33.145490] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:33.145663] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.209 [2024-04-15 16:06:33.163087] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.209 [2024-04-15 16:06:33.163240] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.177858] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.178016] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.193959] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.194118] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.209869] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.210021] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.221695] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.221844] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.238049] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.238196] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.255149] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.255299] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.271879] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.272019] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.288370] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.288510] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.305003] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.305162] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.325995] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.326171] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.343333] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.343502] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.360333] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.360496] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.376757] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.376928] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.392691] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.392842] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.410283] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.410451] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.468 [2024-04-15 16:06:33.425686] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.468 [2024-04-15 16:06:33.425839] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.441044] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.441197] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.452017] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.452151] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.468374] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.468529] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.484927] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.485080] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.502197] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.502350] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.517879] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.518070] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.527114] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.527271] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.543363] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.543562] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.554823] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.554981] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.571702] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.571873] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.588428] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.588614] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.605928] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.606126] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.622282] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.622446] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.640318] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.640493] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.654247] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.654402] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.671028] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.671206] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.745 [2024-04-15 16:06:33.687925] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.745 [2024-04-15 16:06:33.688072] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.704886] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.705055] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.720549] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.720717] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.732202] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.732345] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.753062] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.753233] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.769741] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.769899] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.786519] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.786700] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.803604] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.803791] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.820507] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.820700] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.837174] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.837339] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.853730] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.853917] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.871847] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.872005] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.887720] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.887895] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.908123] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.908291] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.923137] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.923339] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.940061] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.940249] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.955921] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.956078] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.967616] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.967789] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:33.983911] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:33.984069] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.039 [2024-04-15 16:06:34.000665] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.039 [2024-04-15 16:06:34.000836] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.017728] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.017888] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.033130] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.033286] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.044591] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.044738] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.060968] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.061122] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.077491] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.077690] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.094086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.094250] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.110655] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.110802] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.128237] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.128399] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.143355] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.143513] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.159176] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.159333] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.176211] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.176366] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.192870] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.193046] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.208738] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.208900] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.218246] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.218401] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.234385] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.234589] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.246116] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.246301] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.298 [2024-04-15 16:06:34.262533] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.298 [2024-04-15 16:06:34.262717] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.283168] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.283327] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.299999] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.300142] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.317156] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.317303] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.333268] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.333413] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.350305] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.350451] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.366597] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.366745] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.383123] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.383269] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.400253] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.400411] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.416695] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.416839] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.432889] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.433050] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.449669] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.449806] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.467651] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.467782] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.482808] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.482951] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.498499] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.498682] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.557 [2024-04-15 16:06:34.515613] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.557 [2024-04-15 16:06:34.515755] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:04.816 [2024-04-15 16:06:34.531600] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:04.816 [2024-04-15 16:06:34.531752] 
nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.111 [2024-04-15 16:06:35.941900] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.111 [2024-04-15 16:06:35.942044] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.111 [2024-04-15 16:06:35.957127] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.111 [2024-04-15 16:06:35.957265] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.111 [2024-04-15 16:06:35.968614] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.111 [2024-04-15 16:06:35.968747] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.111 [2024-04-15 16:06:35.984139] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.111 [2024-04-15 16:06:35.984270] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.111 [2024-04-15 16:06:36.002346] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.111 [2024-04-15 16:06:36.002509] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.111 [2024-04-15 16:06:36.016779] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.111 [2024-04-15 16:06:36.016993] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.111 [2024-04-15 16:06:36.034418] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.111 [2024-04-15 16:06:36.034630] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.111 [2024-04-15 16:06:36.049638] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.111 [2024-04-15 16:06:36.049808] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.111 00:14:06.111 Latency(us) 00:14:06.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.111 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:06.111 Nvme1n1 : 5.01 13049.27 101.95 0.00 0.00 9796.63 3932.16 22719.15 00:14:06.111 =================================================================================================================== 00:14:06.111 Total : 13049.27 101.95 0.00 0.00 9796.63 3932.16 22719.15 00:14:06.111 [2024-04-15 16:06:36.058908] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.111 [2024-04-15 16:06:36.059055] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.111 [2024-04-15 16:06:36.070909] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.111 [2024-04-15 16:06:36.071068] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 [2024-04-15 16:06:36.082919] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.370 [2024-04-15 16:06:36.083101] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 [2024-04-15 16:06:36.094919] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.370 [2024-04-15 16:06:36.095123] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 [2024-04-15 16:06:36.106936] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.370 [2024-04-15 16:06:36.107157] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 [2024-04-15 16:06:36.118932] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.370 [2024-04-15 16:06:36.119122] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 [2024-04-15 16:06:36.130947] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.370 [2024-04-15 16:06:36.131150] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 [2024-04-15 16:06:36.142949] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.370 [2024-04-15 16:06:36.143147] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 [2024-04-15 16:06:36.154946] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.370 [2024-04-15 16:06:36.155132] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 [2024-04-15 16:06:36.166945] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.370 [2024-04-15 16:06:36.167120] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 [2024-04-15 16:06:36.178939] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.370 [2024-04-15 16:06:36.179081] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 [2024-04-15 16:06:36.190950] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.370 [2024-04-15 16:06:36.191139] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 [2024-04-15 16:06:36.202943] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.370 [2024-04-15 16:06:36.203068] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 [2024-04-15 16:06:36.218969] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.370 [2024-04-15 16:06:36.219184] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 [2024-04-15 16:06:36.234960] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.370 [2024-04-15 16:06:36.235119] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 [2024-04-15 16:06:36.246955] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.370 [2024-04-15 16:06:36.247100] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.370 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (80298) - No such process 00:14:06.370 16:06:36 -- target/zcopy.sh@49 -- # wait 80298 00:14:06.370 16:06:36 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.370 16:06:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.370 16:06:36 -- common/autotest_common.sh@10 -- # set +x 00:14:06.370 16:06:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.370 16:06:36 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:06.370 16:06:36 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.370 16:06:36 -- common/autotest_common.sh@10 -- # set +x 00:14:06.370 delay0 00:14:06.370 16:06:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.370 16:06:36 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:06.370 16:06:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.370 16:06:36 -- common/autotest_common.sh@10 -- # set +x 00:14:06.370 16:06:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.370 16:06:36 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:06.629 [2024-04-15 16:06:36.443137] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:13.189 Initializing NVMe Controllers 00:14:13.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:13.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:13.189 Initialization complete. Launching workers. 00:14:13.189 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 85 00:14:13.189 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 372, failed to submit 33 00:14:13.189 success 246, unsuccess 126, failed 0 00:14:13.189 16:06:42 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:13.189 16:06:42 -- target/zcopy.sh@60 -- # nvmftestfini 00:14:13.189 16:06:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:13.189 16:06:42 -- nvmf/common.sh@117 -- # sync 00:14:13.189 16:06:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:13.189 16:06:42 -- nvmf/common.sh@120 -- # set +e 00:14:13.189 16:06:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:13.189 16:06:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:13.189 rmmod nvme_tcp 00:14:13.189 rmmod nvme_fabrics 00:14:13.189 16:06:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:13.189 16:06:42 -- nvmf/common.sh@124 -- # set -e 00:14:13.189 16:06:42 -- nvmf/common.sh@125 -- # return 0 00:14:13.189 16:06:42 -- nvmf/common.sh@478 -- # '[' -n 80153 ']' 00:14:13.189 16:06:42 -- nvmf/common.sh@479 -- # killprocess 80153 00:14:13.189 16:06:42 -- common/autotest_common.sh@936 -- # '[' -z 80153 ']' 00:14:13.189 16:06:42 -- common/autotest_common.sh@940 -- # kill -0 80153 00:14:13.189 16:06:42 -- common/autotest_common.sh@941 -- # uname 00:14:13.189 16:06:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:13.189 16:06:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80153 00:14:13.189 16:06:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:13.189 16:06:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:13.189 16:06:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80153' 00:14:13.189 killing process with pid 80153 00:14:13.189 16:06:42 -- common/autotest_common.sh@955 -- # kill 80153 00:14:13.189 16:06:42 -- common/autotest_common.sh@960 -- # wait 80153 00:14:13.189 16:06:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:13.189 16:06:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:13.189 16:06:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:13.189 16:06:42 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.189 
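The long run of "Requested NSID 1 already in use" / "Unable to add namespace" errors above is the expected output of the zcopy test re-issuing nvmf_subsystem_add_ns for NSID 1 while the verify job (the Nvme1n1 latency summary, roughly 13 k IOPS over 5 s) is still running. Once that pass completes, the script swaps the namespace for a delay bdev and drives the abort example against it. A minimal sketch of that sequence, assuming rpc_cmd resolves to scripts/rpc.py on the default socket (as the rpc_py definition later in this log suggests); flags, paths, and addresses are copied from the log, not re-verified:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Drop the namespace the duplicate-add loop was targeting.
$rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
# Wrap malloc0 in a delay bdev with large artificial latencies so aborts have in-flight I/O to cancel.
$rpc_py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Run the abort example for 5 seconds against the slowed-down namespace over TCP.
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
    -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'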
16:06:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:13.189 16:06:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.189 16:06:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.189 16:06:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.189 16:06:42 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:13.189 00:14:13.189 real 0m24.488s 00:14:13.189 user 0m39.583s 00:14:13.189 sys 0m7.474s 00:14:13.189 ************************************ 00:14:13.189 END TEST nvmf_zcopy 00:14:13.189 ************************************ 00:14:13.189 16:06:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:13.189 16:06:42 -- common/autotest_common.sh@10 -- # set +x 00:14:13.189 16:06:42 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:13.189 16:06:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:13.189 16:06:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:13.189 16:06:42 -- common/autotest_common.sh@10 -- # set +x 00:14:13.189 ************************************ 00:14:13.189 START TEST nvmf_nmic 00:14:13.189 ************************************ 00:14:13.189 16:06:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:13.189 * Looking for test storage... 00:14:13.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:13.189 16:06:43 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:13.189 16:06:43 -- nvmf/common.sh@7 -- # uname -s 00:14:13.189 16:06:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.189 16:06:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.189 16:06:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.189 16:06:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.189 16:06:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.189 16:06:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.189 16:06:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.189 16:06:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.189 16:06:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.189 16:06:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.189 16:06:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:14:13.189 16:06:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:14:13.189 16:06:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.189 16:06:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.189 16:06:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:13.189 16:06:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.189 16:06:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:13.189 16:06:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.189 16:06:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.189 16:06:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.189 16:06:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.189 16:06:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.189 16:06:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.189 16:06:43 -- paths/export.sh@5 -- # export PATH 00:14:13.189 16:06:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.189 16:06:43 -- nvmf/common.sh@47 -- # : 0 00:14:13.189 16:06:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.189 16:06:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.189 16:06:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.189 16:06:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.189 16:06:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.189 16:06:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.189 16:06:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.189 16:06:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.189 16:06:43 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:13.189 16:06:43 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:13.189 16:06:43 -- target/nmic.sh@14 -- # nvmftestinit 00:14:13.189 16:06:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:13.189 16:06:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.189 16:06:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:13.189 16:06:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:13.189 16:06:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:13.189 16:06:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:13.189 16:06:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.189 16:06:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.189 16:06:43 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:13.189 16:06:43 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:13.189 16:06:43 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:13.189 16:06:43 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:13.189 16:06:43 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:13.189 16:06:43 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:13.189 16:06:43 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.189 16:06:43 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.189 16:06:43 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:13.189 16:06:43 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:13.189 16:06:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:13.189 16:06:43 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:13.189 16:06:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:13.189 16:06:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.189 16:06:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:13.189 16:06:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:13.189 16:06:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:13.189 16:06:43 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:13.189 16:06:43 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:13.189 16:06:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:13.448 Cannot find device "nvmf_tgt_br" 00:14:13.448 16:06:43 -- nvmf/common.sh@155 -- # true 00:14:13.448 16:06:43 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:13.448 Cannot find device "nvmf_tgt_br2" 00:14:13.448 16:06:43 -- nvmf/common.sh@156 -- # true 00:14:13.448 16:06:43 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:13.448 16:06:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:13.448 Cannot find device "nvmf_tgt_br" 00:14:13.448 16:06:43 -- nvmf/common.sh@158 -- # true 00:14:13.448 16:06:43 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:13.448 Cannot find device "nvmf_tgt_br2" 00:14:13.448 16:06:43 -- nvmf/common.sh@159 -- # true 00:14:13.448 16:06:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:13.448 16:06:43 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:13.448 16:06:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:13.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.448 16:06:43 -- nvmf/common.sh@162 -- # true 00:14:13.448 16:06:43 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:13.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.448 16:06:43 -- nvmf/common.sh@163 -- # true 00:14:13.448 16:06:43 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:13.448 16:06:43 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:13.448 16:06:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:13.448 16:06:43 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:13.448 
16:06:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:13.448 16:06:43 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:13.448 16:06:43 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:13.448 16:06:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:13.448 16:06:43 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:13.448 16:06:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:13.448 16:06:43 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:13.448 16:06:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:13.448 16:06:43 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:13.448 16:06:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:13.448 16:06:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:13.448 16:06:43 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:13.448 16:06:43 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:13.448 16:06:43 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:13.706 16:06:43 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:13.706 16:06:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:13.706 16:06:43 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:13.706 16:06:43 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:13.706 16:06:43 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:13.706 16:06:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:13.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:14:13.706 00:14:13.706 --- 10.0.0.2 ping statistics --- 00:14:13.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.706 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:13.706 16:06:43 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:13.706 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:13.706 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:14:13.706 00:14:13.706 --- 10.0.0.3 ping statistics --- 00:14:13.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.706 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:13.706 16:06:43 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:13.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:13.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:13.706 00:14:13.706 --- 10.0.0.1 ping statistics --- 00:14:13.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.706 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:13.706 16:06:43 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.706 16:06:43 -- nvmf/common.sh@422 -- # return 0 00:14:13.706 16:06:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:13.706 16:06:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.706 16:06:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:13.706 16:06:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:13.706 16:06:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.706 16:06:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:13.706 16:06:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:13.706 16:06:43 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:13.706 16:06:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:13.706 16:06:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:13.706 16:06:43 -- common/autotest_common.sh@10 -- # set +x 00:14:13.706 16:06:43 -- nvmf/common.sh@470 -- # nvmfpid=80629 00:14:13.706 16:06:43 -- nvmf/common.sh@471 -- # waitforlisten 80629 00:14:13.706 16:06:43 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:13.706 16:06:43 -- common/autotest_common.sh@817 -- # '[' -z 80629 ']' 00:14:13.706 16:06:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.706 16:06:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:13.706 16:06:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.706 16:06:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:13.706 16:06:43 -- common/autotest_common.sh@10 -- # set +x 00:14:13.706 [2024-04-15 16:06:43.574889] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:14:13.706 [2024-04-15 16:06:43.575194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.964 [2024-04-15 16:06:43.725893] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:13.964 [2024-04-15 16:06:43.775258] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.964 [2024-04-15 16:06:43.775493] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.964 [2024-04-15 16:06:43.775641] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.964 [2024-04-15 16:06:43.775699] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.964 [2024-04-15 16:06:43.775742] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
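Before the nvmf_tgt application above was launched inside the target namespace, nvmftestinit built the veth/bridge topology that the three pings verify (10.0.0.2 and 10.0.0.3 inside the namespace, 10.0.0.1 on the initiator side). A condensed sketch of that wiring using the same interface names, with the second target interface omitted for brevity; this is an approximation of the framework's steps, not a drop-in replacement:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# Bridge the initiator-side and target-side halves of the veth pairs together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Admit NVMe/TCP traffic on port 4420 and allow hairpin forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target sanity check, as in the log above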
00:14:13.964 [2024-04-15 16:06:43.775919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.964 [2024-04-15 16:06:43.776503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.964 [2024-04-15 16:06:43.776694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.964 [2024-04-15 16:06:43.776696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.530 16:06:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:14.530 16:06:44 -- common/autotest_common.sh@850 -- # return 0 00:14:14.530 16:06:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:14.530 16:06:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:14.530 16:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:14.788 16:06:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.788 16:06:44 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.788 16:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.788 16:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:14.788 [2024-04-15 16:06:44.532305] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.788 16:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.788 16:06:44 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:14.788 16:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.788 16:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:14.788 Malloc0 00:14:14.788 16:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.788 16:06:44 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:14.788 16:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.788 16:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:14.788 16:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.788 16:06:44 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:14.788 16:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.788 16:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:14.788 16:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.788 16:06:44 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.788 16:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.788 16:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:14.788 [2024-04-15 16:06:44.599828] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.788 16:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.788 test case1: single bdev can't be used in multiple subsystems 00:14:14.789 16:06:44 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:14.789 16:06:44 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:14.789 16:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.789 16:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:14.789 16:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.789 16:06:44 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:14.789 16:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 
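With the target up on core mask 0xF, the nmic test provisions everything over JSON-RPC: the TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem cnode1 exposing that bdev on listener 10.0.0.2:4420, plus a second subsystem cnode2 for the negative test that follows. A sketch of the same calls, assuming the rpc_cmd wrapper seen above is equivalent to scripts/rpc.py against the default /var/tmp/spdk.sock:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Second subsystem for test case1; adding Malloc0 to it is expected to fail
# because cnode1 already holds an exclusive_write claim on the bdev.
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420

That expected failure is exactly the -32602 "Invalid parameters" JSON-RPC response recorded a few lines below.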
00:14:14.789 16:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:14.789 16:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.789 16:06:44 -- target/nmic.sh@28 -- # nmic_status=0 00:14:14.789 16:06:44 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:14.789 16:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.789 16:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:14.789 [2024-04-15 16:06:44.623638] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:14.789 [2024-04-15 16:06:44.623792] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:14.789 [2024-04-15 16:06:44.623923] nvmf_rpc.c:1525:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.789 request: 00:14:14.789 { 00:14:14.789 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:14.789 "namespace": { 00:14:14.789 "bdev_name": "Malloc0", 00:14:14.789 "no_auto_visible": false 00:14:14.789 }, 00:14:14.789 "method": "nvmf_subsystem_add_ns", 00:14:14.789 "req_id": 1 00:14:14.789 } 00:14:14.789 Got JSON-RPC error response 00:14:14.789 response: 00:14:14.789 { 00:14:14.789 "code": -32602, 00:14:14.789 "message": "Invalid parameters" 00:14:14.789 } 00:14:14.789 16:06:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:14:14.789 16:06:44 -- target/nmic.sh@29 -- # nmic_status=1 00:14:14.789 16:06:44 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:14.789 Adding namespace failed - expected result. 00:14:14.789 16:06:44 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:14.789 test case2: host connect to nvmf target in multiple paths 00:14:14.789 16:06:44 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:14.789 16:06:44 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:14.789 16:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.789 16:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:14.789 [2024-04-15 16:06:44.635771] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:14.789 16:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.789 16:06:44 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:15.064 16:06:44 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:15.064 16:06:44 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:15.064 16:06:44 -- common/autotest_common.sh@1184 -- # local i=0 00:14:15.064 16:06:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:15.064 16:06:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:14:15.064 16:06:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:16.994 16:06:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:16.994 16:06:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.994 16:06:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:16.994 16:06:46 -- 
common/autotest_common.sh@1193 -- # nvme_devices=1 00:14:16.994 16:06:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.994 16:06:46 -- common/autotest_common.sh@1194 -- # return 0 00:14:16.994 16:06:46 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:16.994 [global] 00:14:16.994 thread=1 00:14:16.994 invalidate=1 00:14:16.994 rw=write 00:14:16.994 time_based=1 00:14:16.994 runtime=1 00:14:16.994 ioengine=libaio 00:14:16.994 direct=1 00:14:16.994 bs=4096 00:14:16.994 iodepth=1 00:14:16.994 norandommap=0 00:14:16.994 numjobs=1 00:14:16.994 00:14:16.994 verify_dump=1 00:14:16.994 verify_backlog=512 00:14:16.994 verify_state_save=0 00:14:16.994 do_verify=1 00:14:16.994 verify=crc32c-intel 00:14:16.994 [job0] 00:14:16.994 filename=/dev/nvme0n1 00:14:16.994 Could not set queue depth (nvme0n1) 00:14:17.252 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:17.252 fio-3.35 00:14:17.252 Starting 1 thread 00:14:18.628 00:14:18.628 job0: (groupid=0, jobs=1): err= 0: pid=80721: Mon Apr 15 16:06:48 2024 00:14:18.628 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:14:18.628 slat (nsec): min=7498, max=38466, avg=10423.39, stdev=2626.58 00:14:18.628 clat (usec): min=114, max=563, avg=155.55, stdev=20.62 00:14:18.628 lat (usec): min=122, max=576, avg=165.98, stdev=20.97 00:14:18.628 clat percentiles (usec): 00:14:18.628 | 1.00th=[ 121], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 141], 00:14:18.628 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:14:18.628 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 184], 00:14:18.628 | 99.00th=[ 202], 99.50th=[ 215], 99.90th=[ 351], 99.95th=[ 429], 00:14:18.628 | 99.99th=[ 562] 00:14:18.628 write: IOPS=3627, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1001msec); 0 zone resets 00:14:18.628 slat (nsec): min=11668, max=92118, avg=16386.71, stdev=4645.47 00:14:18.628 clat (usec): min=67, max=451, avg=92.80, stdev=16.16 00:14:18.628 lat (usec): min=79, max=470, avg=109.19, stdev=17.44 00:14:18.628 clat percentiles (usec): 00:14:18.628 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 83], 00:14:18.628 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 92], 60.00th=[ 94], 00:14:18.628 | 70.00th=[ 97], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 115], 00:14:18.628 | 99.00th=[ 131], 99.50th=[ 139], 99.90th=[ 314], 99.95th=[ 437], 00:14:18.628 | 99.99th=[ 453] 00:14:18.628 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:14:18.628 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:14:18.628 lat (usec) : 100=39.65%, 250=60.12%, 500=0.21%, 750=0.01% 00:14:18.628 cpu : usr=2.50%, sys=7.50%, ctx=7216, majf=0, minf=2 00:14:18.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:18.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.628 issued rwts: total=3584,3631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:18.628 00:14:18.628 Run status group 0 (all jobs): 00:14:18.628 READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:14:18.628 WRITE: bw=14.2MiB/s (14.9MB/s), 14.2MiB/s-14.2MiB/s (14.9MB/s-14.9MB/s), io=14.2MiB (14.9MB), run=1001-1001msec 00:14:18.628 
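The fio-wrapper call above expands into the one-job libaio config shown (4 KiB sequential writes at queue depth 1 with crc32c verification) and lands at roughly 3.6 k read and write IOPS against the connected namespace. A standalone equivalent, assuming the namespace still enumerates as /dev/nvme0n1 (check lsblk -o NAME,SERIAL before writing to a raw device):

cat > /tmp/nmic-job0.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nmic-job0.fio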
00:14:18.628 Disk stats (read/write): 00:14:18.628 nvme0n1: ios=3122/3480, merge=0/0, ticks=491/349, in_queue=840, util=91.28% 00:14:18.628 16:06:48 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:18.628 16:06:48 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:18.628 16:06:48 -- common/autotest_common.sh@1205 -- # local i=0 00:14:18.628 16:06:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.628 16:06:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:18.628 16:06:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:18.628 16:06:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.628 16:06:48 -- common/autotest_common.sh@1217 -- # return 0 00:14:18.628 16:06:48 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:18.628 16:06:48 -- target/nmic.sh@53 -- # nvmftestfini 00:14:18.628 16:06:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:18.628 16:06:48 -- nvmf/common.sh@117 -- # sync 00:14:18.628 16:06:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.628 16:06:48 -- nvmf/common.sh@120 -- # set +e 00:14:18.628 16:06:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.628 16:06:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.628 rmmod nvme_tcp 00:14:18.628 rmmod nvme_fabrics 00:14:18.628 16:06:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.628 16:06:48 -- nvmf/common.sh@124 -- # set -e 00:14:18.628 16:06:48 -- nvmf/common.sh@125 -- # return 0 00:14:18.628 16:06:48 -- nvmf/common.sh@478 -- # '[' -n 80629 ']' 00:14:18.628 16:06:48 -- nvmf/common.sh@479 -- # killprocess 80629 00:14:18.628 16:06:48 -- common/autotest_common.sh@936 -- # '[' -z 80629 ']' 00:14:18.628 16:06:48 -- common/autotest_common.sh@940 -- # kill -0 80629 00:14:18.628 16:06:48 -- common/autotest_common.sh@941 -- # uname 00:14:18.628 16:06:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:18.628 16:06:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80629 00:14:18.628 16:06:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:18.628 killing process with pid 80629 00:14:18.628 16:06:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:18.628 16:06:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80629' 00:14:18.628 16:06:48 -- common/autotest_common.sh@955 -- # kill 80629 00:14:18.628 16:06:48 -- common/autotest_common.sh@960 -- # wait 80629 00:14:18.887 16:06:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:18.887 16:06:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:18.887 16:06:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:18.887 16:06:48 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:18.887 16:06:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:18.887 16:06:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.887 16:06:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:18.887 16:06:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.887 16:06:48 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:18.887 ************************************ 00:14:18.887 END TEST nvmf_nmic 00:14:18.887 ************************************ 00:14:18.887 00:14:18.887 real 0m5.671s 00:14:18.887 user 0m17.614s 00:14:18.887 sys 0m2.573s 
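On the host side, the nmic test that just finished connected to cnode1 over both listeners (4420 and 4421), waited for the SPDKISFASTANDAWESOME serial to appear, ran the fio job, then disconnected by NQN and unloaded the initiator modules. A sketch of those steps with the host NQN/ID taken from this run; the polling loop is a simplified stand-in for the waitforserial/waitforserial_disconnect helpers:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a
HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
# Wait until the namespace is visible by serial number, then run the workload.
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
# ... fio job runs here ...
# Teardown: drop both paths and unload the kernel initiator modules.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics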
00:14:18.887 16:06:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:18.887 16:06:48 -- common/autotest_common.sh@10 -- # set +x 00:14:18.887 16:06:48 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:18.887 16:06:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:18.887 16:06:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:18.887 16:06:48 -- common/autotest_common.sh@10 -- # set +x 00:14:18.887 ************************************ 00:14:18.887 START TEST nvmf_fio_target 00:14:18.887 ************************************ 00:14:18.887 16:06:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:19.146 * Looking for test storage... 00:14:19.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:19.146 16:06:48 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:19.146 16:06:48 -- nvmf/common.sh@7 -- # uname -s 00:14:19.146 16:06:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.146 16:06:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.146 16:06:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.146 16:06:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.146 16:06:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.146 16:06:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.146 16:06:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.146 16:06:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.146 16:06:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.146 16:06:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.146 16:06:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:14:19.146 16:06:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:14:19.146 16:06:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.146 16:06:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.146 16:06:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:19.146 16:06:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.146 16:06:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:19.146 16:06:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.146 16:06:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.146 16:06:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.146 16:06:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.146 16:06:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.146 16:06:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.146 16:06:48 -- paths/export.sh@5 -- # export PATH 00:14:19.146 16:06:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.146 16:06:48 -- nvmf/common.sh@47 -- # : 0 00:14:19.146 16:06:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.146 16:06:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.146 16:06:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.146 16:06:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.146 16:06:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.146 16:06:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.146 16:06:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.146 16:06:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.146 16:06:48 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.146 16:06:48 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.146 16:06:48 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:19.146 16:06:48 -- target/fio.sh@16 -- # nvmftestinit 00:14:19.146 16:06:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:19.146 16:06:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.146 16:06:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:19.146 16:06:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:19.146 16:06:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:19.146 16:06:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.146 16:06:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.146 16:06:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.146 16:06:48 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:19.146 16:06:48 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:19.146 16:06:48 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:19.146 16:06:48 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 
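The nvmf/common.sh lines above derive the host identity that later nvme connect calls reuse: NVME_HOSTNQN from `nvme gen-hostnqn`, NVME_HOSTID from the UUID embedded in that NQN, and both packed into the NVME_HOST argument array. A minimal standalone sketch of that idea follows; the variable names match the log, but the UUID extraction shown here is illustrative rather than the exact common.sh code:

# Generate a host NQN once and reuse its embedded UUID as the host ID,
# so --hostnqn and --hostid stay consistent across nvme connect calls.
NVME_HOSTNQN=$(nvme gen-hostnqn)         # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}          # keep only the trailing UUID portion
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
printf 'hostnqn=%s hostid=%s\n' "$NVME_HOSTNQN" "$NVME_HOSTID"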
00:14:19.146 16:06:48 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:19.146 16:06:48 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:19.146 16:06:48 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.146 16:06:48 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.146 16:06:48 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:19.146 16:06:48 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:19.146 16:06:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:19.146 16:06:48 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:19.146 16:06:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:19.146 16:06:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.146 16:06:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:19.146 16:06:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:19.146 16:06:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:19.146 16:06:48 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:19.146 16:06:48 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:19.146 16:06:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:19.146 Cannot find device "nvmf_tgt_br" 00:14:19.146 16:06:48 -- nvmf/common.sh@155 -- # true 00:14:19.146 16:06:48 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:19.146 Cannot find device "nvmf_tgt_br2" 00:14:19.146 16:06:48 -- nvmf/common.sh@156 -- # true 00:14:19.146 16:06:48 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:19.146 16:06:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:19.146 Cannot find device "nvmf_tgt_br" 00:14:19.146 16:06:49 -- nvmf/common.sh@158 -- # true 00:14:19.146 16:06:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:19.146 Cannot find device "nvmf_tgt_br2" 00:14:19.146 16:06:49 -- nvmf/common.sh@159 -- # true 00:14:19.146 16:06:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:19.146 16:06:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:19.146 16:06:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:19.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.146 16:06:49 -- nvmf/common.sh@162 -- # true 00:14:19.146 16:06:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:19.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.146 16:06:49 -- nvmf/common.sh@163 -- # true 00:14:19.146 16:06:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:19.146 16:06:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:19.464 16:06:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:19.464 16:06:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:19.464 16:06:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:19.464 16:06:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:19.464 16:06:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:19.464 16:06:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:19.464 16:06:49 -- nvmf/common.sh@180 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:19.464 16:06:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:19.464 16:06:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:19.464 16:06:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:19.464 16:06:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:19.464 16:06:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:19.464 16:06:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:19.464 16:06:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:19.464 16:06:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:19.464 16:06:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:19.464 16:06:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:19.464 16:06:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:19.464 16:06:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:19.464 16:06:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:19.464 16:06:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:19.464 16:06:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:19.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:14:19.464 00:14:19.464 --- 10.0.0.2 ping statistics --- 00:14:19.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.464 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:19.464 16:06:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:19.464 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:19.464 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:14:19.464 00:14:19.464 --- 10.0.0.3 ping statistics --- 00:14:19.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.464 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:19.464 16:06:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:19.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:19.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:14:19.464 00:14:19.464 --- 10.0.0.1 ping statistics --- 00:14:19.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.464 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:19.464 16:06:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.464 16:06:49 -- nvmf/common.sh@422 -- # return 0 00:14:19.464 16:06:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:19.464 16:06:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.464 16:06:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:19.464 16:06:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:19.464 16:06:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.464 16:06:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:19.464 16:06:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:19.464 16:06:49 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:19.464 16:06:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:19.464 16:06:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:19.464 16:06:49 -- common/autotest_common.sh@10 -- # set +x 00:14:19.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.464 16:06:49 -- nvmf/common.sh@470 -- # nvmfpid=80903 00:14:19.464 16:06:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:19.464 16:06:49 -- nvmf/common.sh@471 -- # waitforlisten 80903 00:14:19.464 16:06:49 -- common/autotest_common.sh@817 -- # '[' -z 80903 ']' 00:14:19.464 16:06:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.464 16:06:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:19.464 16:06:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.464 16:06:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:19.464 16:06:49 -- common/autotest_common.sh@10 -- # set +x 00:14:19.723 [2024-04-15 16:06:49.419521] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:14:19.723 [2024-04-15 16:06:49.419827] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.723 [2024-04-15 16:06:49.564934] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.723 [2024-04-15 16:06:49.613163] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.723 [2024-04-15 16:06:49.613385] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.723 [2024-04-15 16:06:49.613537] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.723 [2024-04-15 16:06:49.613715] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.723 [2024-04-15 16:06:49.613766] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
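The nvmf_veth_init sequence above builds a bridged veth/namespace topology (10.0.0.1 on the initiator side, 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace) and sanity-checks it with pings before the target is started inside that namespace. A condensed sketch of the same topology, using the interface names from the log, run as root, and omitting the second target interface and the cleanup path:

# Create the target network namespace and two veth pairs (initiator side, target side).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# Address the endpoints: initiator stays in the root namespace, target lives in the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the two peer ends so initiator and target share an L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Allow NVMe/TCP traffic on the default port and bridge-local forwarding, then verify reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target reachability check, as in the log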
00:14:19.723 [2024-04-15 16:06:49.614007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.723 [2024-04-15 16:06:49.614159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.723 [2024-04-15 16:06:49.614243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.723 [2024-04-15 16:06:49.614244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:19.982 16:06:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:19.982 16:06:49 -- common/autotest_common.sh@850 -- # return 0 00:14:19.982 16:06:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:19.982 16:06:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:19.982 16:06:49 -- common/autotest_common.sh@10 -- # set +x 00:14:19.982 16:06:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.982 16:06:49 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:19.982 [2024-04-15 16:06:49.935462] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.240 16:06:49 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:20.499 16:06:50 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:20.499 16:06:50 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:20.757 16:06:50 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:20.757 16:06:50 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:21.015 16:06:50 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:21.015 16:06:50 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:21.273 16:06:51 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:21.273 16:06:51 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:21.531 16:06:51 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:21.788 16:06:51 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:21.788 16:06:51 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:22.047 16:06:51 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:22.047 16:06:51 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:22.305 16:06:52 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:22.305 16:06:52 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:22.305 16:06:52 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:22.563 16:06:52 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:22.563 16:06:52 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:23.130 16:06:52 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:23.130 16:06:52 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.130 16:06:53 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.397 [2024-04-15 16:06:53.221676] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.397 16:06:53 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:23.708 16:06:53 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:23.985 16:06:53 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.985 16:06:53 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:23.985 16:06:53 -- common/autotest_common.sh@1184 -- # local i=0 00:14:23.985 16:06:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.985 16:06:53 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:14:23.985 16:06:53 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:14:23.985 16:06:53 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:25.900 16:06:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:25.900 16:06:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:25.900 16:06:55 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.900 16:06:55 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:14:25.900 16:06:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.900 16:06:55 -- common/autotest_common.sh@1194 -- # return 0 00:14:25.900 16:06:55 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:26.157 [global] 00:14:26.157 thread=1 00:14:26.157 invalidate=1 00:14:26.157 rw=write 00:14:26.157 time_based=1 00:14:26.157 runtime=1 00:14:26.157 ioengine=libaio 00:14:26.157 direct=1 00:14:26.157 bs=4096 00:14:26.157 iodepth=1 00:14:26.157 norandommap=0 00:14:26.157 numjobs=1 00:14:26.157 00:14:26.157 verify_dump=1 00:14:26.157 verify_backlog=512 00:14:26.157 verify_state_save=0 00:14:26.157 do_verify=1 00:14:26.157 verify=crc32c-intel 00:14:26.157 [job0] 00:14:26.157 filename=/dev/nvme0n1 00:14:26.157 [job1] 00:14:26.157 filename=/dev/nvme0n2 00:14:26.157 [job2] 00:14:26.157 filename=/dev/nvme0n3 00:14:26.157 [job3] 00:14:26.157 filename=/dev/nvme0n4 00:14:26.157 Could not set queue depth (nvme0n1) 00:14:26.157 Could not set queue depth (nvme0n2) 00:14:26.157 Could not set queue depth (nvme0n3) 00:14:26.157 Could not set queue depth (nvme0n4) 00:14:26.157 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:26.157 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:26.157 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:26.157 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:26.157 fio-3.35 00:14:26.157 Starting 4 threads 00:14:27.528 00:14:27.528 job0: (groupid=0, jobs=1): err= 0: pid=81075: Mon Apr 15 16:06:57 2024 00:14:27.528 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:14:27.528 slat (nsec): min=7934, max=43349, avg=10070.60, stdev=2517.21 00:14:27.528 clat (usec): min=129, max=303, avg=166.03, stdev=14.52 
00:14:27.528 lat (usec): min=138, max=315, avg=176.10, stdev=15.09 00:14:27.528 clat percentiles (usec): 00:14:27.528 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 155], 00:14:27.528 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:14:27.528 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:14:27.528 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 227], 99.95th=[ 235], 00:14:27.528 | 99.99th=[ 306] 00:14:27.528 write: IOPS=3209, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1001msec); 0 zone resets 00:14:27.528 slat (usec): min=10, max=112, avg=16.71, stdev= 4.82 00:14:27.528 clat (usec): min=80, max=629, avg=123.81, stdev=17.61 00:14:27.528 lat (usec): min=96, max=642, avg=140.52, stdev=19.09 00:14:27.528 clat percentiles (usec): 00:14:27.528 | 1.00th=[ 96], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 111], 00:14:27.528 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 123], 60.00th=[ 127], 00:14:27.528 | 70.00th=[ 131], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 151], 00:14:27.528 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 202], 99.95th=[ 239], 00:14:27.528 | 99.99th=[ 627] 00:14:27.528 bw ( KiB/s): min=12288, max=12288, per=24.74%, avg=12288.00, stdev= 0.00, samples=1 00:14:27.528 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:27.528 lat (usec) : 100=1.69%, 250=98.28%, 500=0.02%, 750=0.02% 00:14:27.528 cpu : usr=2.20%, sys=6.60%, ctx=6287, majf=0, minf=10 00:14:27.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:27.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.529 issued rwts: total=3072,3213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:27.529 job1: (groupid=0, jobs=1): err= 0: pid=81076: Mon Apr 15 16:06:57 2024 00:14:27.529 read: IOPS=2923, BW=11.4MiB/s (12.0MB/s)(11.4MiB/1001msec) 00:14:27.529 slat (nsec): min=8044, max=37127, avg=12241.72, stdev=3141.19 00:14:27.529 clat (usec): min=130, max=232, avg=169.81, stdev=15.31 00:14:27.529 lat (usec): min=139, max=254, avg=182.05, stdev=16.02 00:14:27.529 clat percentiles (usec): 00:14:27.529 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:14:27.529 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:14:27.529 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:14:27.529 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 227], 99.95th=[ 227], 00:14:27.529 | 99.99th=[ 233] 00:14:27.529 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:14:27.529 slat (usec): min=12, max=225, avg=19.79, stdev= 7.98 00:14:27.529 clat (usec): min=89, max=202, avg=129.50, stdev=14.31 00:14:27.529 lat (usec): min=105, max=373, avg=149.29, stdev=17.65 00:14:27.529 clat percentiles (usec): 00:14:27.529 | 1.00th=[ 102], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 118], 00:14:27.529 | 30.00th=[ 122], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:14:27.529 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 155], 00:14:27.529 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 180], 99.95th=[ 202], 00:14:27.529 | 99.99th=[ 202] 00:14:27.529 bw ( KiB/s): min=12288, max=12288, per=24.74%, avg=12288.00, stdev= 0.00, samples=1 00:14:27.529 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:27.529 lat (usec) : 100=0.27%, 250=99.73% 00:14:27.529 cpu : usr=1.60%, sys=8.40%, ctx=5998, majf=0, minf=11 
00:14:27.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:27.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.529 issued rwts: total=2926,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:27.529 job2: (groupid=0, jobs=1): err= 0: pid=81077: Mon Apr 15 16:06:57 2024 00:14:27.529 read: IOPS=2800, BW=10.9MiB/s (11.5MB/s)(10.9MiB/1001msec) 00:14:27.529 slat (nsec): min=8028, max=39668, avg=10011.45, stdev=2248.26 00:14:27.529 clat (usec): min=141, max=2608, avg=179.09, stdev=51.66 00:14:27.529 lat (usec): min=149, max=2621, avg=189.10, stdev=51.85 00:14:27.529 clat percentiles (usec): 00:14:27.529 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:14:27.529 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:14:27.529 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 206], 00:14:27.529 | 99.00th=[ 223], 99.50th=[ 231], 99.90th=[ 750], 99.95th=[ 865], 00:14:27.529 | 99.99th=[ 2606] 00:14:27.529 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:14:27.529 slat (nsec): min=12451, max=73662, avg=15978.48, stdev=3915.40 00:14:27.529 clat (usec): min=99, max=218, avg=134.95, stdev=14.34 00:14:27.529 lat (usec): min=112, max=265, avg=150.93, stdev=15.61 00:14:27.529 clat percentiles (usec): 00:14:27.529 | 1.00th=[ 109], 5.00th=[ 115], 10.00th=[ 119], 20.00th=[ 123], 00:14:27.529 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 135], 60.00th=[ 139], 00:14:27.529 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 161], 00:14:27.529 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 186], 99.95th=[ 192], 00:14:27.529 | 99.99th=[ 219] 00:14:27.529 bw ( KiB/s): min=12312, max=12312, per=24.79%, avg=12312.00, stdev= 0.00, samples=1 00:14:27.529 iops : min= 3078, max= 3078, avg=3078.00, stdev= 0.00, samples=1 00:14:27.529 lat (usec) : 100=0.02%, 250=99.88%, 500=0.03%, 750=0.03%, 1000=0.02% 00:14:27.529 lat (msec) : 4=0.02% 00:14:27.529 cpu : usr=1.30%, sys=6.70%, ctx=5893, majf=0, minf=9 00:14:27.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:27.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.529 issued rwts: total=2803,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:27.529 job3: (groupid=0, jobs=1): err= 0: pid=81079: Mon Apr 15 16:06:57 2024 00:14:27.529 read: IOPS=2751, BW=10.7MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:14:27.529 slat (nsec): min=8020, max=31931, avg=10295.87, stdev=2557.12 00:14:27.529 clat (usec): min=139, max=775, avg=177.51, stdev=21.21 00:14:27.529 lat (usec): min=148, max=784, avg=187.81, stdev=21.61 00:14:27.529 clat percentiles (usec): 00:14:27.529 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:14:27.529 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:14:27.529 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:14:27.529 | 99.00th=[ 221], 99.50th=[ 231], 99.90th=[ 474], 99.95th=[ 586], 00:14:27.529 | 99.99th=[ 775] 00:14:27.529 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:14:27.529 slat (nsec): min=10334, max=98361, avg=16370.47, stdev=4203.79 00:14:27.529 clat (usec): min=96, max=207, 
avg=138.57, stdev=14.72 00:14:27.529 lat (usec): min=115, max=306, avg=154.94, stdev=15.73 00:14:27.529 clat percentiles (usec): 00:14:27.529 | 1.00th=[ 110], 5.00th=[ 116], 10.00th=[ 121], 20.00th=[ 127], 00:14:27.529 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 143], 00:14:27.529 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 165], 00:14:27.529 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 194], 99.95th=[ 206], 00:14:27.529 | 99.99th=[ 208] 00:14:27.529 bw ( KiB/s): min=12288, max=12288, per=24.74%, avg=12288.00, stdev= 0.00, samples=1 00:14:27.529 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:27.529 lat (usec) : 100=0.02%, 250=99.91%, 500=0.03%, 750=0.02%, 1000=0.02% 00:14:27.529 cpu : usr=1.10%, sys=7.00%, ctx=5827, majf=0, minf=5 00:14:27.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:27.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.529 issued rwts: total=2754,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:27.529 00:14:27.529 Run status group 0 (all jobs): 00:14:27.529 READ: bw=45.1MiB/s (47.3MB/s), 10.7MiB/s-12.0MiB/s (11.3MB/s-12.6MB/s), io=45.1MiB (47.3MB), run=1001-1001msec 00:14:27.529 WRITE: bw=48.5MiB/s (50.9MB/s), 12.0MiB/s-12.5MiB/s (12.6MB/s-13.1MB/s), io=48.6MiB (50.9MB), run=1001-1001msec 00:14:27.529 00:14:27.529 Disk stats (read/write): 00:14:27.529 nvme0n1: ios=2610/2711, merge=0/0, ticks=456/355, in_queue=811, util=85.97% 00:14:27.529 nvme0n2: ios=2444/2560, merge=0/0, ticks=444/363, in_queue=807, util=86.35% 00:14:27.529 nvme0n3: ios=2377/2560, merge=0/0, ticks=438/367, in_queue=805, util=88.94% 00:14:27.530 nvme0n4: ios=2328/2560, merge=0/0, ticks=418/374, in_queue=792, util=89.60% 00:14:27.530 16:06:57 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:27.530 [global] 00:14:27.530 thread=1 00:14:27.530 invalidate=1 00:14:27.530 rw=randwrite 00:14:27.530 time_based=1 00:14:27.530 runtime=1 00:14:27.530 ioengine=libaio 00:14:27.530 direct=1 00:14:27.530 bs=4096 00:14:27.530 iodepth=1 00:14:27.530 norandommap=0 00:14:27.530 numjobs=1 00:14:27.530 00:14:27.530 verify_dump=1 00:14:27.530 verify_backlog=512 00:14:27.530 verify_state_save=0 00:14:27.530 do_verify=1 00:14:27.530 verify=crc32c-intel 00:14:27.530 [job0] 00:14:27.530 filename=/dev/nvme0n1 00:14:27.530 [job1] 00:14:27.530 filename=/dev/nvme0n2 00:14:27.530 [job2] 00:14:27.530 filename=/dev/nvme0n3 00:14:27.530 [job3] 00:14:27.530 filename=/dev/nvme0n4 00:14:27.530 Could not set queue depth (nvme0n1) 00:14:27.530 Could not set queue depth (nvme0n2) 00:14:27.530 Could not set queue depth (nvme0n3) 00:14:27.530 Could not set queue depth (nvme0n4) 00:14:27.530 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:27.530 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:27.530 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:27.530 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:27.530 fio-3.35 00:14:27.530 Starting 4 threads 00:14:28.905 00:14:28.905 job0: (groupid=0, jobs=1): err= 0: pid=81142: Mon Apr 15 16:06:58 
2024 00:14:28.905 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:14:28.905 slat (nsec): min=7704, max=33492, avg=9941.55, stdev=2211.61 00:14:28.905 clat (usec): min=120, max=747, avg=159.86, stdev=19.41 00:14:28.905 lat (usec): min=128, max=756, avg=169.80, stdev=20.05 00:14:28.905 clat percentiles (usec): 00:14:28.905 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 149], 00:14:28.905 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:14:28.905 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 184], 00:14:28.905 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 231], 99.95th=[ 578], 00:14:28.905 | 99.99th=[ 750] 00:14:28.905 write: IOPS=3519, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1001msec); 0 zone resets 00:14:28.905 slat (usec): min=9, max=165, avg=16.10, stdev= 5.35 00:14:28.905 clat (usec): min=73, max=586, avg=117.47, stdev=15.61 00:14:28.905 lat (usec): min=86, max=600, avg=133.57, stdev=17.63 00:14:28.905 clat percentiles (usec): 00:14:28.905 | 1.00th=[ 92], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 106], 00:14:28.905 | 30.00th=[ 110], 40.00th=[ 114], 50.00th=[ 117], 60.00th=[ 120], 00:14:28.905 | 70.00th=[ 124], 80.00th=[ 129], 90.00th=[ 135], 95.00th=[ 141], 00:14:28.905 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 174], 99.95th=[ 215], 00:14:28.905 | 99.99th=[ 586] 00:14:28.905 bw ( KiB/s): min=13344, max=13344, per=31.24%, avg=13344.00, stdev= 0.00, samples=1 00:14:28.905 iops : min= 3336, max= 3336, avg=3336.00, stdev= 0.00, samples=1 00:14:28.905 lat (usec) : 100=4.46%, 250=95.48%, 500=0.02%, 750=0.05% 00:14:28.905 cpu : usr=1.60%, sys=7.50%, ctx=6596, majf=0, minf=11 00:14:28.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:28.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.905 issued rwts: total=3072,3523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:28.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:28.905 job1: (groupid=0, jobs=1): err= 0: pid=81143: Mon Apr 15 16:06:58 2024 00:14:28.905 read: IOPS=1758, BW=7033KiB/s (7202kB/s)(7040KiB/1001msec) 00:14:28.905 slat (nsec): min=8037, max=67570, avg=11591.61, stdev=3687.25 00:14:28.905 clat (usec): min=166, max=657, avg=283.73, stdev=39.67 00:14:28.905 lat (usec): min=176, max=666, avg=295.32, stdev=40.16 00:14:28.905 clat percentiles (usec): 00:14:28.905 | 1.00th=[ 231], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 262], 00:14:28.905 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:14:28.905 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 367], 00:14:28.905 | 99.00th=[ 437], 99.50th=[ 490], 99.90th=[ 652], 99.95th=[ 660], 00:14:28.905 | 99.99th=[ 660] 00:14:28.905 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:28.905 slat (usec): min=12, max=123, avg=19.69, stdev= 8.73 00:14:28.905 clat (usec): min=99, max=533, avg=212.25, stdev=50.23 00:14:28.905 lat (usec): min=117, max=548, avg=231.95, stdev=54.75 00:14:28.905 clat percentiles (usec): 00:14:28.905 | 1.00th=[ 115], 5.00th=[ 129], 10.00th=[ 174], 20.00th=[ 190], 00:14:28.905 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:14:28.905 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 260], 95.00th=[ 334], 00:14:28.905 | 99.00th=[ 367], 99.50th=[ 375], 99.90th=[ 437], 99.95th=[ 502], 00:14:28.905 | 99.99th=[ 537] 00:14:28.905 bw ( KiB/s): min= 8192, max= 8192, per=19.18%, avg=8192.00, stdev= 
0.00, samples=1 00:14:28.905 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:28.905 lat (usec) : 100=0.03%, 250=51.58%, 500=48.14%, 750=0.26% 00:14:28.905 cpu : usr=1.30%, sys=4.90%, ctx=3809, majf=0, minf=13 00:14:28.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:28.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.905 issued rwts: total=1760,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:28.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:28.905 job2: (groupid=0, jobs=1): err= 0: pid=81144: Mon Apr 15 16:06:58 2024 00:14:28.905 read: IOPS=1855, BW=7421KiB/s (7599kB/s)(7428KiB/1001msec) 00:14:28.905 slat (nsec): min=8012, max=52313, avg=10880.16, stdev=3693.63 00:14:28.905 clat (usec): min=184, max=1054, avg=287.30, stdev=49.46 00:14:28.905 lat (usec): min=196, max=1072, avg=298.18, stdev=50.69 00:14:28.905 clat percentiles (usec): 00:14:28.905 | 1.00th=[ 225], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 262], 00:14:28.905 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:14:28.905 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 326], 95.00th=[ 371], 00:14:28.905 | 99.00th=[ 502], 99.50th=[ 537], 99.90th=[ 685], 99.95th=[ 1057], 00:14:28.905 | 99.99th=[ 1057] 00:14:28.905 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:28.905 slat (usec): min=12, max=114, avg=18.47, stdev= 7.05 00:14:28.905 clat (usec): min=106, max=381, avg=196.87, stdev=31.50 00:14:28.905 lat (usec): min=124, max=408, avg=215.33, stdev=31.77 00:14:28.905 clat percentiles (usec): 00:14:28.905 | 1.00th=[ 116], 5.00th=[ 128], 10.00th=[ 141], 20.00th=[ 182], 00:14:28.905 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:14:28.905 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 237], 00:14:28.905 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 302], 99.95th=[ 371], 00:14:28.905 | 99.99th=[ 383] 00:14:28.905 bw ( KiB/s): min= 8192, max= 8192, per=19.18%, avg=8192.00, stdev= 0.00, samples=1 00:14:28.905 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:28.905 lat (usec) : 250=55.57%, 500=43.94%, 750=0.46% 00:14:28.905 lat (msec) : 2=0.03% 00:14:28.905 cpu : usr=1.10%, sys=5.00%, ctx=3905, majf=0, minf=14 00:14:28.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:28.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.905 issued rwts: total=1857,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:28.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:28.905 job3: (groupid=0, jobs=1): err= 0: pid=81145: Mon Apr 15 16:06:58 2024 00:14:28.905 read: IOPS=3030, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1001msec) 00:14:28.905 slat (nsec): min=7759, max=45793, avg=9887.13, stdev=2577.27 00:14:28.905 clat (usec): min=136, max=833, avg=169.78, stdev=19.28 00:14:28.905 lat (usec): min=145, max=841, avg=179.66, stdev=19.83 00:14:28.905 clat percentiles (usec): 00:14:28.905 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:14:28.905 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:14:28.905 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 194], 00:14:28.905 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 241], 99.95th=[ 570], 00:14:28.905 | 99.99th=[ 832] 
00:14:28.905 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:14:28.905 slat (usec): min=12, max=102, avg=16.72, stdev= 6.25 00:14:28.905 clat (usec): min=90, max=6964, avg=129.05, stdev=146.64 00:14:28.905 lat (usec): min=103, max=6978, avg=145.76, stdev=147.10 00:14:28.905 clat percentiles (usec): 00:14:28.905 | 1.00th=[ 100], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 115], 00:14:28.906 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 127], 00:14:28.906 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 143], 95.00th=[ 149], 00:14:28.906 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 750], 99.95th=[ 3720], 00:14:28.906 | 99.99th=[ 6980] 00:14:28.906 bw ( KiB/s): min=12288, max=12288, per=28.76%, avg=12288.00, stdev= 0.00, samples=1 00:14:28.906 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:28.906 lat (usec) : 100=0.51%, 250=99.38%, 500=0.02%, 750=0.02%, 1000=0.03% 00:14:28.906 lat (msec) : 4=0.03%, 10=0.02% 00:14:28.906 cpu : usr=1.50%, sys=7.20%, ctx=6129, majf=0, minf=9 00:14:28.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:28.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.906 issued rwts: total=3034,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:28.906 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:28.906 00:14:28.906 Run status group 0 (all jobs): 00:14:28.906 READ: bw=37.9MiB/s (39.8MB/s), 7033KiB/s-12.0MiB/s (7202kB/s-12.6MB/s), io=38.0MiB (39.8MB), run=1001-1001msec 00:14:28.906 WRITE: bw=41.7MiB/s (43.7MB/s), 8184KiB/s-13.7MiB/s (8380kB/s-14.4MB/s), io=41.8MiB (43.8MB), run=1001-1001msec 00:14:28.906 00:14:28.906 Disk stats (read/write): 00:14:28.906 nvme0n1: ios=2661/3072, merge=0/0, ticks=434/381, in_queue=815, util=88.58% 00:14:28.906 nvme0n2: ios=1583/1780, merge=0/0, ticks=458/391, in_queue=849, util=88.59% 00:14:28.906 nvme0n3: ios=1536/1864, merge=0/0, ticks=440/391, in_queue=831, util=89.18% 00:14:28.906 nvme0n4: ios=2560/2709, merge=0/0, ticks=448/351, in_queue=799, util=88.69% 00:14:28.906 16:06:58 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:28.906 [global] 00:14:28.906 thread=1 00:14:28.906 invalidate=1 00:14:28.906 rw=write 00:14:28.906 time_based=1 00:14:28.906 runtime=1 00:14:28.906 ioengine=libaio 00:14:28.906 direct=1 00:14:28.906 bs=4096 00:14:28.906 iodepth=128 00:14:28.906 norandommap=0 00:14:28.906 numjobs=1 00:14:28.906 00:14:28.906 verify_dump=1 00:14:28.906 verify_backlog=512 00:14:28.906 verify_state_save=0 00:14:28.906 do_verify=1 00:14:28.906 verify=crc32c-intel 00:14:28.906 [job0] 00:14:28.906 filename=/dev/nvme0n1 00:14:28.906 [job1] 00:14:28.906 filename=/dev/nvme0n2 00:14:28.906 [job2] 00:14:28.906 filename=/dev/nvme0n3 00:14:28.906 [job3] 00:14:28.906 filename=/dev/nvme0n4 00:14:28.906 Could not set queue depth (nvme0n1) 00:14:28.906 Could not set queue depth (nvme0n2) 00:14:28.906 Could not set queue depth (nvme0n3) 00:14:28.906 Could not set queue depth (nvme0n4) 00:14:28.906 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:28.906 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:28.906 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:28.906 job3: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:28.906 fio-3.35 00:14:28.906 Starting 4 threads 00:14:30.280 00:14:30.280 job0: (groupid=0, jobs=1): err= 0: pid=81198: Mon Apr 15 16:07:00 2024 00:14:30.280 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:14:30.280 slat (usec): min=8, max=10465, avg=211.59, stdev=875.37 00:14:30.280 clat (usec): min=18303, max=49288, avg=26859.25, stdev=5028.82 00:14:30.280 lat (usec): min=18695, max=54281, avg=27070.84, stdev=5102.21 00:14:30.280 clat percentiles (usec): 00:14:30.280 | 1.00th=[18744], 5.00th=[21890], 10.00th=[22152], 20.00th=[22414], 00:14:30.280 | 30.00th=[22676], 40.00th=[23462], 50.00th=[25822], 60.00th=[27657], 00:14:30.280 | 70.00th=[30016], 80.00th=[31589], 90.00th=[32900], 95.00th=[34341], 00:14:30.280 | 99.00th=[43254], 99.50th=[46924], 99.90th=[49021], 99.95th=[49021], 00:14:30.280 | 99.99th=[49546] 00:14:30.280 write: IOPS=2159, BW=8637KiB/s (8844kB/s)(8680KiB/1005msec); 0 zone resets 00:14:30.280 slat (usec): min=11, max=6623, avg=250.63, stdev=877.92 00:14:30.280 clat (usec): min=2790, max=65822, avg=33077.68, stdev=14481.23 00:14:30.280 lat (usec): min=7300, max=65878, avg=33328.31, stdev=14569.15 00:14:30.280 clat percentiles (usec): 00:14:30.280 | 1.00th=[15401], 5.00th=[16319], 10.00th=[17171], 20.00th=[17695], 00:14:30.280 | 30.00th=[21627], 40.00th=[22676], 50.00th=[31589], 60.00th=[39584], 00:14:30.280 | 70.00th=[40633], 80.00th=[45876], 90.00th=[54264], 95.00th=[60031], 00:14:30.280 | 99.00th=[64226], 99.50th=[65799], 99.90th=[65799], 99.95th=[65799], 00:14:30.280 | 99.99th=[65799] 00:14:30.280 bw ( KiB/s): min= 8208, max= 8224, per=13.56%, avg=8216.00, stdev=11.31, samples=2 00:14:30.280 iops : min= 2052, max= 2056, avg=2054.00, stdev= 2.83, samples=2 00:14:30.280 lat (msec) : 4=0.02%, 10=0.19%, 20=13.58%, 50=78.45%, 100=7.75% 00:14:30.280 cpu : usr=2.79%, sys=7.67%, ctx=266, majf=0, minf=11 00:14:30.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:14:30.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:30.280 issued rwts: total=2048,2170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:30.280 job1: (groupid=0, jobs=1): err= 0: pid=81199: Mon Apr 15 16:07:00 2024 00:14:30.280 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:14:30.280 slat (usec): min=8, max=3550, avg=93.07, stdev=359.02 00:14:30.280 clat (usec): min=8925, max=15570, avg=12261.79, stdev=1008.20 00:14:30.280 lat (usec): min=8942, max=15594, avg=12354.86, stdev=1046.96 00:14:30.280 clat percentiles (usec): 00:14:30.280 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10945], 20.00th=[11731], 00:14:30.280 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:14:30.280 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13566], 95.00th=[13960], 00:14:30.280 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15533], 99.95th=[15533], 00:14:30.280 | 99.99th=[15533] 00:14:30.280 write: IOPS=5309, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1004msec); 0 zone resets 00:14:30.280 slat (usec): min=9, max=3514, avg=89.68, stdev=383.94 00:14:30.280 clat (usec): min=3695, max=16425, avg=12046.92, stdev=1098.64 00:14:30.280 lat (usec): min=3708, max=16442, avg=12136.60, stdev=1147.31 00:14:30.280 clat percentiles (usec): 00:14:30.280 | 1.00th=[ 8848], 5.00th=[10814], 10.00th=[11207], 
20.00th=[11469], 00:14:30.280 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:14:30.280 | 70.00th=[12387], 80.00th=[12518], 90.00th=[13042], 95.00th=[14091], 00:14:30.280 | 99.00th=[15139], 99.50th=[15533], 99.90th=[16057], 99.95th=[16319], 00:14:30.280 | 99.99th=[16450] 00:14:30.280 bw ( KiB/s): min=20480, max=21152, per=34.37%, avg=20816.00, stdev=475.18, samples=2 00:14:30.280 iops : min= 5120, max= 5288, avg=5204.00, stdev=118.79, samples=2 00:14:30.280 lat (msec) : 4=0.02%, 10=2.81%, 20=97.17% 00:14:30.280 cpu : usr=5.08%, sys=13.86%, ctx=550, majf=0, minf=8 00:14:30.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:30.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:30.280 issued rwts: total=5120,5331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:30.280 job2: (groupid=0, jobs=1): err= 0: pid=81200: Mon Apr 15 16:07:00 2024 00:14:30.280 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:14:30.280 slat (usec): min=6, max=6081, avg=191.86, stdev=733.20 00:14:30.280 clat (usec): min=13941, max=45864, avg=24395.39, stdev=6496.72 00:14:30.280 lat (usec): min=15607, max=45880, avg=24587.26, stdev=6513.31 00:14:30.280 clat percentiles (usec): 00:14:30.280 | 1.00th=[15664], 5.00th=[17695], 10.00th=[18482], 20.00th=[19792], 00:14:30.280 | 30.00th=[20055], 40.00th=[20317], 50.00th=[21103], 60.00th=[24511], 00:14:30.280 | 70.00th=[27919], 80.00th=[29492], 90.00th=[30540], 95.00th=[39584], 00:14:30.280 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45876], 99.95th=[45876], 00:14:30.280 | 99.99th=[45876] 00:14:30.280 write: IOPS=3039, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1004msec); 0 zone resets 00:14:30.280 slat (usec): min=7, max=8739, avg=157.90, stdev=777.12 00:14:30.280 clat (usec): min=1460, max=39857, avg=20747.75, stdev=5497.18 00:14:30.280 lat (usec): min=5505, max=39909, avg=20905.64, stdev=5473.62 00:14:30.280 clat percentiles (usec): 00:14:30.280 | 1.00th=[ 6194], 5.00th=[15401], 10.00th=[16057], 20.00th=[16319], 00:14:30.280 | 30.00th=[16712], 40.00th=[17433], 50.00th=[20579], 60.00th=[21365], 00:14:30.280 | 70.00th=[22152], 80.00th=[25560], 90.00th=[29230], 95.00th=[29754], 00:14:30.280 | 99.00th=[34866], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:14:30.280 | 99.99th=[40109] 00:14:30.280 bw ( KiB/s): min=11104, max=12312, per=19.33%, avg=11708.00, stdev=854.18, samples=2 00:14:30.280 iops : min= 2776, max= 3078, avg=2927.00, stdev=213.55, samples=2 00:14:30.280 lat (msec) : 2=0.02%, 10=0.98%, 20=38.88%, 50=60.12% 00:14:30.280 cpu : usr=3.19%, sys=8.77%, ctx=289, majf=0, minf=13 00:14:30.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:30.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:30.280 issued rwts: total=2560,3052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:30.280 job3: (groupid=0, jobs=1): err= 0: pid=81201: Mon Apr 15 16:07:00 2024 00:14:30.280 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:14:30.280 slat (usec): min=7, max=4218, avg=104.63, stdev=408.14 00:14:30.280 clat (usec): min=7251, max=18252, avg=13768.22, stdev=1145.00 00:14:30.280 lat (usec): min=7264, max=18274, avg=13872.85, 
stdev=1189.79 00:14:30.280 clat percentiles (usec): 00:14:30.280 | 1.00th=[10290], 5.00th=[11600], 10.00th=[12649], 20.00th=[13304], 00:14:30.280 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:14:30.280 | 70.00th=[13960], 80.00th=[14222], 90.00th=[15401], 95.00th=[15795], 00:14:30.280 | 99.00th=[16712], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:14:30.280 | 99.99th=[18220] 00:14:30.280 write: IOPS=4647, BW=18.2MiB/s (19.0MB/s)(18.2MiB/1004msec); 0 zone resets 00:14:30.280 slat (usec): min=7, max=3998, avg=101.73, stdev=470.63 00:14:30.280 clat (usec): min=3387, max=18253, avg=13580.68, stdev=1444.24 00:14:30.280 lat (usec): min=4029, max=18271, avg=13682.41, stdev=1505.72 00:14:30.280 clat percentiles (usec): 00:14:30.280 | 1.00th=[ 6980], 5.00th=[12256], 10.00th=[12649], 20.00th=[12911], 00:14:30.280 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:14:30.280 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14615], 95.00th=[16319], 00:14:30.280 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:14:30.280 | 99.99th=[18220] 00:14:30.280 bw ( KiB/s): min=17096, max=19807, per=30.46%, avg=18451.50, stdev=1916.97, samples=2 00:14:30.281 iops : min= 4274, max= 4951, avg=4612.50, stdev=478.71, samples=2 00:14:30.281 lat (msec) : 4=0.01%, 10=1.05%, 20=98.94% 00:14:30.281 cpu : usr=4.79%, sys=13.06%, ctx=456, majf=0, minf=7 00:14:30.281 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:30.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:30.281 issued rwts: total=4608,4666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.281 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:30.281 00:14:30.281 Run status group 0 (all jobs): 00:14:30.281 READ: bw=55.7MiB/s (58.4MB/s), 8151KiB/s-19.9MiB/s (8347kB/s-20.9MB/s), io=56.0MiB (58.7MB), run=1004-1005msec 00:14:30.281 WRITE: bw=59.2MiB/s (62.0MB/s), 8637KiB/s-20.7MiB/s (8844kB/s-21.7MB/s), io=59.4MiB (62.3MB), run=1004-1005msec 00:14:30.281 00:14:30.281 Disk stats (read/write): 00:14:30.281 nvme0n1: ios=1585/1962, merge=0/0, ticks=13823/20561, in_queue=34384, util=86.83% 00:14:30.281 nvme0n2: ios=4207/4608, merge=0/0, ticks=16373/16067, in_queue=32440, util=86.56% 00:14:30.281 nvme0n3: ios=2208/2560, merge=0/0, ticks=13101/11725, in_queue=24826, util=87.98% 00:14:30.281 nvme0n4: ios=3710/4096, merge=0/0, ticks=16300/16086, in_queue=32386, util=89.56% 00:14:30.281 16:07:00 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:30.281 [global] 00:14:30.281 thread=1 00:14:30.281 invalidate=1 00:14:30.281 rw=randwrite 00:14:30.281 time_based=1 00:14:30.281 runtime=1 00:14:30.281 ioengine=libaio 00:14:30.281 direct=1 00:14:30.281 bs=4096 00:14:30.281 iodepth=128 00:14:30.281 norandommap=0 00:14:30.281 numjobs=1 00:14:30.281 00:14:30.281 verify_dump=1 00:14:30.281 verify_backlog=512 00:14:30.281 verify_state_save=0 00:14:30.281 do_verify=1 00:14:30.281 verify=crc32c-intel 00:14:30.281 [job0] 00:14:30.281 filename=/dev/nvme0n1 00:14:30.281 [job1] 00:14:30.281 filename=/dev/nvme0n2 00:14:30.281 [job2] 00:14:30.281 filename=/dev/nvme0n3 00:14:30.281 [job3] 00:14:30.281 filename=/dev/nvme0n4 00:14:30.281 Could not set queue depth (nvme0n1) 00:14:30.281 Could not set queue depth (nvme0n2) 00:14:30.281 Could not set queue depth (nvme0n3) 00:14:30.281 Could not set queue 
depth (nvme0n4) 00:14:30.539 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:30.539 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:30.539 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:30.539 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:30.539 fio-3.35 00:14:30.539 Starting 4 threads 00:14:31.915 00:14:31.915 job0: (groupid=0, jobs=1): err= 0: pid=81255: Mon Apr 15 16:07:01 2024 00:14:31.915 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:14:31.915 slat (usec): min=8, max=6168, avg=88.19, stdev=540.19 00:14:31.915 clat (usec): min=7253, max=19885, avg=12407.41, stdev=1363.28 00:14:31.915 lat (usec): min=7307, max=23977, avg=12495.60, stdev=1391.35 00:14:31.915 clat percentiles (usec): 00:14:31.915 | 1.00th=[ 7963], 5.00th=[10814], 10.00th=[11338], 20.00th=[11994], 00:14:31.915 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:14:31.915 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13173], 95.00th=[13435], 00:14:31.915 | 99.00th=[19268], 99.50th=[19530], 99.90th=[19792], 99.95th=[19792], 00:14:31.915 | 99.99th=[19792] 00:14:31.915 write: IOPS=5455, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1004msec); 0 zone resets 00:14:31.915 slat (usec): min=8, max=8208, avg=91.73, stdev=562.73 00:14:31.915 clat (usec): min=3443, max=16737, avg=11631.68, stdev=1394.27 00:14:31.915 lat (usec): min=3466, max=16761, avg=11723.41, stdev=1308.35 00:14:31.915 clat percentiles (usec): 00:14:31.915 | 1.00th=[ 6915], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[10945], 00:14:31.915 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:14:31.915 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12649], 95.00th=[13829], 00:14:31.915 | 99.00th=[16319], 99.50th=[16450], 99.90th=[16712], 99.95th=[16712], 00:14:31.915 | 99.99th=[16712] 00:14:31.915 bw ( KiB/s): min=20880, max=21963, per=35.02%, avg=21421.50, stdev=765.80, samples=2 00:14:31.915 iops : min= 5220, max= 5490, avg=5355.00, stdev=190.92, samples=2 00:14:31.915 lat (msec) : 4=0.19%, 10=4.82%, 20=94.99% 00:14:31.915 cpu : usr=4.09%, sys=15.05%, ctx=226, majf=0, minf=15 00:14:31.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:31.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:31.915 issued rwts: total=5120,5477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:31.915 job1: (groupid=0, jobs=1): err= 0: pid=81258: Mon Apr 15 16:07:01 2024 00:14:31.915 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:14:31.915 slat (usec): min=4, max=16041, avg=206.97, stdev=907.86 00:14:31.915 clat (usec): min=13622, max=41788, avg=26444.88, stdev=3799.71 00:14:31.915 lat (usec): min=13634, max=42161, avg=26651.85, stdev=3841.97 00:14:31.915 clat percentiles (usec): 00:14:31.915 | 1.00th=[16319], 5.00th=[20055], 10.00th=[22676], 20.00th=[23987], 00:14:31.915 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:14:31.915 | 70.00th=[26870], 80.00th=[28967], 90.00th=[32375], 95.00th=[33424], 00:14:31.915 | 99.00th=[36439], 99.50th=[37487], 99.90th=[41157], 99.95th=[41157], 00:14:31.915 | 99.99th=[41681] 00:14:31.915 write: 
IOPS=2682, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1010msec); 0 zone resets 00:14:31.915 slat (usec): min=6, max=9334, avg=167.31, stdev=758.96 00:14:31.915 clat (usec): min=1442, max=36682, avg=22290.69, stdev=4392.38 00:14:31.915 lat (usec): min=1455, max=37368, avg=22458.01, stdev=4375.33 00:14:31.915 clat percentiles (usec): 00:14:31.915 | 1.00th=[12649], 5.00th=[16319], 10.00th=[17433], 20.00th=[18744], 00:14:31.915 | 30.00th=[19268], 40.00th=[20317], 50.00th=[22152], 60.00th=[23725], 00:14:31.915 | 70.00th=[25035], 80.00th=[25822], 90.00th=[27132], 95.00th=[29230], 00:14:31.915 | 99.00th=[34341], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:14:31.915 | 99.99th=[36439] 00:14:31.915 bw ( KiB/s): min= 8424, max=12232, per=16.88%, avg=10328.00, stdev=2692.66, samples=2 00:14:31.915 iops : min= 2106, max= 3058, avg=2582.00, stdev=673.17, samples=2 00:14:31.915 lat (msec) : 2=0.08%, 10=0.04%, 20=21.24%, 50=78.65% 00:14:31.915 cpu : usr=1.78%, sys=4.76%, ctx=419, majf=0, minf=9 00:14:31.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:31.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:31.915 issued rwts: total=2560,2709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:31.915 job2: (groupid=0, jobs=1): err= 0: pid=81263: Mon Apr 15 16:07:01 2024 00:14:31.915 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:14:31.915 slat (usec): min=4, max=13025, avg=108.58, stdev=646.64 00:14:31.915 clat (usec): min=6902, max=24951, avg=14383.51, stdev=1824.63 00:14:31.915 lat (usec): min=6916, max=28684, avg=14492.10, stdev=1835.87 00:14:31.915 clat percentiles (usec): 00:14:31.915 | 1.00th=[ 8979], 5.00th=[12518], 10.00th=[13042], 20.00th=[13566], 00:14:31.915 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14353], 60.00th=[14484], 00:14:31.915 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[17957], 00:14:31.915 | 99.00th=[21890], 99.50th=[23725], 99.90th=[25035], 99.95th=[25035], 00:14:31.915 | 99.99th=[25035] 00:14:31.915 write: IOPS=4670, BW=18.2MiB/s (19.1MB/s)(18.4MiB/1006msec); 0 zone resets 00:14:31.915 slat (usec): min=7, max=10323, avg=98.71, stdev=634.94 00:14:31.915 clat (usec): min=486, max=19259, avg=13023.43, stdev=1412.33 00:14:31.915 lat (usec): min=6294, max=19296, avg=13122.14, stdev=1306.25 00:14:31.915 clat percentiles (usec): 00:14:31.915 | 1.00th=[ 7373], 5.00th=[10814], 10.00th=[11731], 20.00th=[12256], 00:14:31.915 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:14:31.915 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14484], 00:14:31.915 | 99.00th=[16712], 99.50th=[17695], 99.90th=[18744], 99.95th=[18744], 00:14:31.915 | 99.99th=[19268] 00:14:31.915 bw ( KiB/s): min=16392, max=20513, per=30.17%, avg=18452.50, stdev=2913.99, samples=2 00:14:31.915 iops : min= 4098, max= 5128, avg=4613.00, stdev=728.32, samples=2 00:14:31.915 lat (usec) : 500=0.01% 00:14:31.915 lat (msec) : 10=3.27%, 20=96.06%, 50=0.67% 00:14:31.915 cpu : usr=4.18%, sys=12.74%, ctx=264, majf=0, minf=7 00:14:31.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:31.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:31.915 issued rwts: total=4608,4699,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:14:31.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:31.915 job3: (groupid=0, jobs=1): err= 0: pid=81264: Mon Apr 15 16:07:01 2024 00:14:31.915 read: IOPS=2134, BW=8540KiB/s (8745kB/s)(8608KiB/1008msec) 00:14:31.915 slat (usec): min=4, max=11155, avg=234.00, stdev=1004.95 00:14:31.915 clat (usec): min=7593, max=46531, avg=28639.59, stdev=4821.52 00:14:31.915 lat (usec): min=7607, max=46545, avg=28873.60, stdev=4878.48 00:14:31.915 clat percentiles (usec): 00:14:31.915 | 1.00th=[10814], 5.00th=[21103], 10.00th=[24249], 20.00th=[26084], 00:14:31.915 | 30.00th=[26608], 40.00th=[27395], 50.00th=[27919], 60.00th=[29230], 00:14:31.915 | 70.00th=[30540], 80.00th=[32375], 90.00th=[35390], 95.00th=[36963], 00:14:31.915 | 99.00th=[38011], 99.50th=[38536], 99.90th=[42206], 99.95th=[45876], 00:14:31.915 | 99.99th=[46400] 00:14:31.915 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:14:31.915 slat (usec): min=6, max=8656, avg=188.15, stdev=795.64 00:14:31.915 clat (usec): min=13744, max=36577, avg=25189.00, stdev=3754.73 00:14:31.915 lat (usec): min=14178, max=38256, avg=25377.15, stdev=3792.30 00:14:31.915 clat percentiles (usec): 00:14:31.915 | 1.00th=[15533], 5.00th=[17433], 10.00th=[19530], 20.00th=[22938], 00:14:31.915 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25560], 60.00th=[26084], 00:14:31.915 | 70.00th=[26346], 80.00th=[27132], 90.00th=[29230], 95.00th=[32113], 00:14:31.915 | 99.00th=[35390], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:14:31.915 | 99.99th=[36439] 00:14:31.915 bw ( KiB/s): min=10000, max=10296, per=16.59%, avg=10148.00, stdev=209.30, samples=2 00:14:31.915 iops : min= 2500, max= 2574, avg=2537.00, stdev=52.33, samples=2 00:14:31.915 lat (msec) : 10=0.30%, 20=7.05%, 50=92.66% 00:14:31.915 cpu : usr=1.69%, sys=4.27%, ctx=399, majf=0, minf=17 00:14:31.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:14:31.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:31.915 issued rwts: total=2152,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:31.915 00:14:31.915 Run status group 0 (all jobs): 00:14:31.915 READ: bw=55.8MiB/s (58.6MB/s), 8540KiB/s-19.9MiB/s (8745kB/s-20.9MB/s), io=56.4MiB (59.1MB), run=1004-1010msec 00:14:31.915 WRITE: bw=59.7MiB/s (62.6MB/s), 9.92MiB/s-21.3MiB/s (10.4MB/s-22.3MB/s), io=60.3MiB (63.3MB), run=1004-1010msec 00:14:31.915 00:14:31.915 Disk stats (read/write): 00:14:31.915 nvme0n1: ios=4398/4608, merge=0/0, ticks=50344/49435, in_queue=99779, util=87.06% 00:14:31.915 nvme0n2: ios=2097/2449, merge=0/0, ticks=27365/25520, in_queue=52885, util=88.11% 00:14:31.915 nvme0n3: ios=3710/4096, merge=0/0, ticks=50023/50026, in_queue=100049, util=88.52% 00:14:31.915 nvme0n4: ios=1851/2048, merge=0/0, ticks=26453/24059, in_queue=50512, util=88.53% 00:14:31.915 16:07:01 -- target/fio.sh@55 -- # sync 00:14:31.915 16:07:01 -- target/fio.sh@59 -- # fio_pid=81277 00:14:31.915 16:07:01 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:31.915 16:07:01 -- target/fio.sh@61 -- # sleep 3 00:14:31.915 [global] 00:14:31.915 thread=1 00:14:31.915 invalidate=1 00:14:31.915 rw=read 00:14:31.915 time_based=1 00:14:31.915 runtime=10 00:14:31.915 ioengine=libaio 00:14:31.915 direct=1 00:14:31.915 bs=4096 00:14:31.915 iodepth=1 00:14:31.915 
norandommap=1 00:14:31.915 numjobs=1 00:14:31.915 00:14:31.915 [job0] 00:14:31.915 filename=/dev/nvme0n1 00:14:31.915 [job1] 00:14:31.915 filename=/dev/nvme0n2 00:14:31.915 [job2] 00:14:31.915 filename=/dev/nvme0n3 00:14:31.915 [job3] 00:14:31.915 filename=/dev/nvme0n4 00:14:31.915 Could not set queue depth (nvme0n1) 00:14:31.915 Could not set queue depth (nvme0n2) 00:14:31.915 Could not set queue depth (nvme0n3) 00:14:31.916 Could not set queue depth (nvme0n4) 00:14:31.916 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:31.916 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:31.916 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:31.916 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:31.916 fio-3.35 00:14:31.916 Starting 4 threads 00:14:35.230 16:07:04 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:35.230 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=63463424, buflen=4096 00:14:35.230 fio: pid=81325, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:35.230 16:07:04 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:35.230 fio: pid=81324, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:35.230 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=71573504, buflen=4096 00:14:35.230 16:07:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:35.230 16:07:05 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:35.488 fio: pid=81322, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:35.488 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=52264960, buflen=4096 00:14:35.488 16:07:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:35.489 16:07:05 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:35.747 fio: pid=81323, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:35.747 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=58441728, buflen=4096 00:14:35.747 16:07:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:35.747 16:07:05 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:35.747 00:14:35.747 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81322: Mon Apr 15 16:07:05 2024 00:14:35.747 read: IOPS=3766, BW=14.7MiB/s (15.4MB/s)(49.8MiB/3388msec) 00:14:35.747 slat (usec): min=7, max=12616, avg=13.52, stdev=192.07 00:14:35.747 clat (usec): min=109, max=4321, avg=251.08, stdev=50.33 00:14:35.747 lat (usec): min=118, max=12948, avg=264.61, stdev=199.79 00:14:35.747 clat percentiles (usec): 00:14:35.747 | 1.00th=[ 145], 5.00th=[ 194], 10.00th=[ 219], 20.00th=[ 235], 00:14:35.747 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 260], 00:14:35.747 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:14:35.747 | 99.00th=[ 310], 99.50th=[ 314], 99.90th=[ 359], 99.95th=[ 510], 00:14:35.747 | 99.99th=[ 2073] 00:14:35.747 
bw ( KiB/s): min=14000, max=15328, per=22.43%, avg=14896.00, stdev=491.52, samples=6 00:14:35.747 iops : min= 3500, max= 3832, avg=3724.00, stdev=122.88, samples=6 00:14:35.747 lat (usec) : 250=42.51%, 500=57.42%, 750=0.02%, 1000=0.02% 00:14:35.747 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:14:35.747 cpu : usr=0.94%, sys=3.57%, ctx=12811, majf=0, minf=1 00:14:35.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.747 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.747 issued rwts: total=12761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:35.747 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81323: Mon Apr 15 16:07:05 2024 00:14:35.747 read: IOPS=3949, BW=15.4MiB/s (16.2MB/s)(55.7MiB/3613msec) 00:14:35.747 slat (usec): min=7, max=12807, avg=14.22, stdev=207.64 00:14:35.747 clat (usec): min=77, max=2007, avg=237.84, stdev=52.54 00:14:35.747 lat (usec): min=107, max=13052, avg=252.06, stdev=214.01 00:14:35.747 clat percentiles (usec): 00:14:35.747 | 1.00th=[ 113], 5.00th=[ 123], 10.00th=[ 137], 20.00th=[ 221], 00:14:35.747 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:14:35.747 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:14:35.747 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 404], 99.95th=[ 635], 00:14:35.747 | 99.99th=[ 1319] 00:14:35.747 bw ( KiB/s): min=14216, max=20505, per=23.65%, avg=15707.57, stdev=2145.83, samples=7 00:14:35.747 iops : min= 3554, max= 5126, avg=3926.86, stdev=536.37, samples=7 00:14:35.747 lat (usec) : 100=0.01%, 250=48.34%, 500=51.57%, 750=0.04%, 1000=0.02% 00:14:35.747 lat (msec) : 2=0.01%, 4=0.01% 00:14:35.747 cpu : usr=0.75%, sys=3.88%, ctx=14280, majf=0, minf=1 00:14:35.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.747 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.747 issued rwts: total=14269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:35.747 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81324: Mon Apr 15 16:07:05 2024 00:14:35.748 read: IOPS=5478, BW=21.4MiB/s (22.4MB/s)(68.3MiB/3190msec) 00:14:35.748 slat (usec): min=7, max=12208, avg=11.03, stdev=109.75 00:14:35.748 clat (usec): min=124, max=3704, avg=170.54, stdev=35.26 00:14:35.748 lat (usec): min=139, max=12395, avg=181.56, stdev=115.40 00:14:35.748 clat percentiles (usec): 00:14:35.748 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:14:35.748 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:14:35.748 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 196], 00:14:35.748 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 260], 99.95th=[ 453], 00:14:35.748 | 99.99th=[ 2089] 00:14:35.748 bw ( KiB/s): min=20920, max=22680, per=32.98%, avg=21906.67, stdev=726.10, samples=6 00:14:35.748 iops : min= 5230, max= 5670, avg=5476.67, stdev=181.53, samples=6 00:14:35.748 lat (usec) : 250=99.88%, 500=0.07%, 750=0.01%, 1000=0.02% 00:14:35.748 lat (msec) : 4=0.01% 00:14:35.748 cpu : usr=1.22%, sys=5.14%, ctx=17488, majf=0, minf=1 00:14:35.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.748 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.748 issued rwts: total=17475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:35.748 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81325: Mon Apr 15 16:07:05 2024 00:14:35.748 read: IOPS=5301, BW=20.7MiB/s (21.7MB/s)(60.5MiB/2923msec) 00:14:35.748 slat (nsec): min=8037, max=64735, avg=10477.19, stdev=3141.70 00:14:35.748 clat (usec): min=140, max=6933, avg=177.31, stdev=102.84 00:14:35.748 lat (usec): min=150, max=6942, avg=187.79, stdev=103.04 00:14:35.748 clat percentiles (usec): 00:14:35.748 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:14:35.748 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:14:35.748 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 202], 00:14:35.748 | 99.00th=[ 221], 99.50th=[ 237], 99.90th=[ 404], 99.95th=[ 758], 00:14:35.748 | 99.99th=[ 6849] 00:14:35.748 bw ( KiB/s): min=21056, max=21784, per=32.08%, avg=21305.60, stdev=284.67, samples=5 00:14:35.748 iops : min= 5264, max= 5446, avg=5326.40, stdev=71.17, samples=5 00:14:35.748 lat (usec) : 250=99.59%, 500=0.33%, 750=0.03%, 1000=0.01% 00:14:35.748 lat (msec) : 2=0.01%, 4=0.01%, 10=0.03% 00:14:35.748 cpu : usr=1.23%, sys=5.00%, ctx=15495, majf=0, minf=2 00:14:35.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:35.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.748 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.748 issued rwts: total=15495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:35.748 00:14:35.748 Run status group 0 (all jobs): 00:14:35.748 READ: bw=64.9MiB/s (68.0MB/s), 14.7MiB/s-21.4MiB/s (15.4MB/s-22.4MB/s), io=234MiB (246MB), run=2923-3613msec 00:14:35.748 00:14:35.748 Disk stats (read/write): 00:14:35.748 nvme0n1: ios=12612/0, merge=0/0, ticks=3172/0, in_queue=3172, util=94.88% 00:14:35.748 nvme0n2: ios=14262/0, merge=0/0, ticks=3398/0, in_queue=3398, util=95.15% 00:14:35.748 nvme0n3: ios=17010/0, merge=0/0, ticks=2919/0, in_queue=2919, util=96.20% 00:14:35.748 nvme0n4: ios=15147/0, merge=0/0, ticks=2686/0, in_queue=2686, util=96.00% 00:14:36.006 16:07:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:36.006 16:07:05 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:36.006 16:07:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:36.006 16:07:05 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:36.264 16:07:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:36.264 16:07:06 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:36.522 16:07:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:36.522 16:07:06 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:36.780 16:07:06 -- target/fio.sh@69 -- # fio_status=0 00:14:36.780 16:07:06 -- 
target/fio.sh@70 -- # wait 81277 00:14:36.780 16:07:06 -- target/fio.sh@70 -- # fio_status=4 00:14:36.780 16:07:06 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:36.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.780 16:07:06 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:36.780 16:07:06 -- common/autotest_common.sh@1205 -- # local i=0 00:14:36.780 16:07:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.780 16:07:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:36.780 16:07:06 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.780 16:07:06 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:36.780 16:07:06 -- common/autotest_common.sh@1217 -- # return 0 00:14:36.780 16:07:06 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:36.780 16:07:06 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:36.780 nvmf hotplug test: fio failed as expected 00:14:36.780 16:07:06 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:37.038 16:07:06 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:37.038 16:07:06 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:37.038 16:07:06 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:37.038 16:07:06 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:37.038 16:07:06 -- target/fio.sh@91 -- # nvmftestfini 00:14:37.038 16:07:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:37.038 16:07:06 -- nvmf/common.sh@117 -- # sync 00:14:37.038 16:07:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:37.038 16:07:06 -- nvmf/common.sh@120 -- # set +e 00:14:37.038 16:07:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:37.038 16:07:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:37.038 rmmod nvme_tcp 00:14:37.038 rmmod nvme_fabrics 00:14:37.038 16:07:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:37.038 16:07:06 -- nvmf/common.sh@124 -- # set -e 00:14:37.038 16:07:06 -- nvmf/common.sh@125 -- # return 0 00:14:37.038 16:07:06 -- nvmf/common.sh@478 -- # '[' -n 80903 ']' 00:14:37.038 16:07:06 -- nvmf/common.sh@479 -- # killprocess 80903 00:14:37.038 16:07:06 -- common/autotest_common.sh@936 -- # '[' -z 80903 ']' 00:14:37.038 16:07:06 -- common/autotest_common.sh@940 -- # kill -0 80903 00:14:37.038 16:07:06 -- common/autotest_common.sh@941 -- # uname 00:14:37.038 16:07:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:37.038 16:07:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80903 00:14:37.038 16:07:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:37.038 16:07:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:37.038 16:07:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80903' 00:14:37.038 killing process with pid 80903 00:14:37.038 16:07:06 -- common/autotest_common.sh@955 -- # kill 80903 00:14:37.038 16:07:06 -- common/autotest_common.sh@960 -- # wait 80903 00:14:37.296 16:07:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:37.296 16:07:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:37.296 16:07:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:37.296 16:07:07 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:37.296 16:07:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:37.296 
16:07:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.296 16:07:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.296 16:07:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.296 16:07:07 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:37.296 00:14:37.296 real 0m18.439s 00:14:37.296 user 1m8.499s 00:14:37.296 sys 0m10.346s 00:14:37.296 16:07:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:37.296 16:07:07 -- common/autotest_common.sh@10 -- # set +x 00:14:37.296 ************************************ 00:14:37.296 END TEST nvmf_fio_target 00:14:37.296 ************************************ 00:14:37.555 16:07:07 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:37.555 16:07:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:37.555 16:07:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:37.555 16:07:07 -- common/autotest_common.sh@10 -- # set +x 00:14:37.555 ************************************ 00:14:37.555 START TEST nvmf_bdevio 00:14:37.555 ************************************ 00:14:37.555 16:07:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:37.555 * Looking for test storage... 00:14:37.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:37.555 16:07:07 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:37.555 16:07:07 -- nvmf/common.sh@7 -- # uname -s 00:14:37.555 16:07:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.555 16:07:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.555 16:07:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.555 16:07:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.555 16:07:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.555 16:07:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.555 16:07:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.555 16:07:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.555 16:07:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.555 16:07:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.555 16:07:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:14:37.555 16:07:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:14:37.555 16:07:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.555 16:07:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.555 16:07:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:37.555 16:07:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.555 16:07:07 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:37.555 16:07:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.555 16:07:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.555 16:07:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.555 16:07:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.555 16:07:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.555 16:07:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.555 16:07:07 -- paths/export.sh@5 -- # export PATH 00:14:37.555 16:07:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.555 16:07:07 -- nvmf/common.sh@47 -- # : 0 00:14:37.555 16:07:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.555 16:07:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.555 16:07:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.555 16:07:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.555 16:07:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.555 16:07:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.555 16:07:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.555 16:07:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.555 16:07:07 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:37.555 16:07:07 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:37.555 16:07:07 -- target/bdevio.sh@14 -- # nvmftestinit 00:14:37.555 16:07:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:37.555 16:07:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.555 16:07:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:37.555 16:07:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:37.555 16:07:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:37.555 16:07:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:37.555 16:07:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.555 16:07:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.555 16:07:07 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:37.555 16:07:07 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:37.555 16:07:07 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:37.555 16:07:07 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:37.555 16:07:07 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:37.555 16:07:07 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:37.555 16:07:07 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.555 16:07:07 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.555 16:07:07 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:37.555 16:07:07 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:37.555 16:07:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:37.555 16:07:07 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:37.555 16:07:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:37.555 16:07:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.555 16:07:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:37.555 16:07:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:37.555 16:07:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:37.555 16:07:07 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:37.555 16:07:07 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:37.555 16:07:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:37.813 Cannot find device "nvmf_tgt_br" 00:14:37.813 16:07:07 -- nvmf/common.sh@155 -- # true 00:14:37.813 16:07:07 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.813 Cannot find device "nvmf_tgt_br2" 00:14:37.813 16:07:07 -- nvmf/common.sh@156 -- # true 00:14:37.813 16:07:07 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:37.813 16:07:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:37.813 Cannot find device "nvmf_tgt_br" 00:14:37.813 16:07:07 -- nvmf/common.sh@158 -- # true 00:14:37.813 16:07:07 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:37.813 Cannot find device "nvmf_tgt_br2" 00:14:37.813 16:07:07 -- nvmf/common.sh@159 -- # true 00:14:37.813 16:07:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:37.813 16:07:07 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:37.813 16:07:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.813 16:07:07 -- nvmf/common.sh@162 -- # true 00:14:37.813 16:07:07 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:37.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.813 16:07:07 -- nvmf/common.sh@163 -- # true 00:14:37.814 16:07:07 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:37.814 16:07:07 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:37.814 16:07:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:37.814 16:07:07 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:37.814 
16:07:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:37.814 16:07:07 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:37.814 16:07:07 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:37.814 16:07:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:37.814 16:07:07 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:37.814 16:07:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:37.814 16:07:07 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:37.814 16:07:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:37.814 16:07:07 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:37.814 16:07:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:38.072 16:07:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:38.072 16:07:07 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:38.072 16:07:07 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:38.072 16:07:07 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:38.072 16:07:07 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:38.072 16:07:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.072 16:07:07 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:38.072 16:07:07 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.072 16:07:07 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.072 16:07:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:38.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:14:38.072 00:14:38.072 --- 10.0.0.2 ping statistics --- 00:14:38.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.072 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:38.072 16:07:07 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:38.072 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:38.072 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:38.072 00:14:38.072 --- 10.0.0.3 ping statistics --- 00:14:38.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.072 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:38.072 16:07:07 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:38.072 00:14:38.072 --- 10.0.0.1 ping statistics --- 00:14:38.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.072 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:38.072 16:07:07 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.072 16:07:07 -- nvmf/common.sh@422 -- # return 0 00:14:38.072 16:07:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:38.072 16:07:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.072 16:07:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:38.072 16:07:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:38.072 16:07:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.072 16:07:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:38.072 16:07:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:38.072 16:07:07 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:38.072 16:07:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:38.072 16:07:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:38.072 16:07:07 -- common/autotest_common.sh@10 -- # set +x 00:14:38.072 16:07:07 -- nvmf/common.sh@470 -- # nvmfpid=81588 00:14:38.072 16:07:07 -- nvmf/common.sh@471 -- # waitforlisten 81588 00:14:38.072 16:07:07 -- common/autotest_common.sh@817 -- # '[' -z 81588 ']' 00:14:38.072 16:07:07 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:38.072 16:07:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.072 16:07:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:38.072 16:07:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.072 16:07:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:38.072 16:07:07 -- common/autotest_common.sh@10 -- # set +x 00:14:38.072 [2024-04-15 16:07:07.949730] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:14:38.072 [2024-04-15 16:07:07.949988] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.329 [2024-04-15 16:07:08.090317] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.329 [2024-04-15 16:07:08.145294] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.329 [2024-04-15 16:07:08.145764] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.329 [2024-04-15 16:07:08.146355] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.329 [2024-04-15 16:07:08.147029] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.329 [2024-04-15 16:07:08.147303] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:38.330 [2024-04-15 16:07:08.147689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:38.330 [2024-04-15 16:07:08.150187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:38.330 [2024-04-15 16:07:08.150321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:38.330 [2024-04-15 16:07:08.150326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:39.261 16:07:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:39.261 16:07:08 -- common/autotest_common.sh@850 -- # return 0 00:14:39.261 16:07:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:39.261 16:07:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:39.261 16:07:08 -- common/autotest_common.sh@10 -- # set +x 00:14:39.261 16:07:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.261 16:07:08 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.261 16:07:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.261 16:07:08 -- common/autotest_common.sh@10 -- # set +x 00:14:39.261 [2024-04-15 16:07:08.955558] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.261 16:07:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.261 16:07:08 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:39.261 16:07:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.261 16:07:08 -- common/autotest_common.sh@10 -- # set +x 00:14:39.261 Malloc0 00:14:39.261 16:07:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.261 16:07:09 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:39.261 16:07:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.261 16:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:39.261 16:07:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.261 16:07:09 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:39.261 16:07:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.261 16:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:39.261 16:07:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.261 16:07:09 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.261 16:07:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.261 16:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:39.261 [2024-04-15 16:07:09.022080] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.261 16:07:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.261 16:07:09 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:39.261 16:07:09 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:39.261 16:07:09 -- nvmf/common.sh@521 -- # config=() 00:14:39.261 16:07:09 -- nvmf/common.sh@521 -- # local subsystem config 00:14:39.261 16:07:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:39.261 16:07:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:39.261 { 00:14:39.261 "params": { 00:14:39.261 "name": "Nvme$subsystem", 00:14:39.261 "trtype": "$TEST_TRANSPORT", 00:14:39.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:39.261 "adrfam": "ipv4", 00:14:39.261 "trsvcid": "$NVMF_PORT", 00:14:39.261 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:39.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:39.261 "hdgst": ${hdgst:-false}, 00:14:39.261 "ddgst": ${ddgst:-false} 00:14:39.261 }, 00:14:39.261 "method": "bdev_nvme_attach_controller" 00:14:39.261 } 00:14:39.261 EOF 00:14:39.261 )") 00:14:39.261 16:07:09 -- nvmf/common.sh@543 -- # cat 00:14:39.261 16:07:09 -- nvmf/common.sh@545 -- # jq . 00:14:39.262 16:07:09 -- nvmf/common.sh@546 -- # IFS=, 00:14:39.262 16:07:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:39.262 "params": { 00:14:39.262 "name": "Nvme1", 00:14:39.262 "trtype": "tcp", 00:14:39.262 "traddr": "10.0.0.2", 00:14:39.262 "adrfam": "ipv4", 00:14:39.262 "trsvcid": "4420", 00:14:39.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.262 "hdgst": false, 00:14:39.262 "ddgst": false 00:14:39.262 }, 00:14:39.262 "method": "bdev_nvme_attach_controller" 00:14:39.262 }' 00:14:39.262 [2024-04-15 16:07:09.074370] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:14:39.262 [2024-04-15 16:07:09.074673] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81624 ] 00:14:39.262 [2024-04-15 16:07:09.218423] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:39.520 [2024-04-15 16:07:09.288192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.520 [2024-04-15 16:07:09.288260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.520 [2024-04-15 16:07:09.288248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.520 [2024-04-15 16:07:09.297504] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:14:39.520 I/O targets: 00:14:39.520 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:39.520 00:14:39.520 00:14:39.520 CUnit - A unit testing framework for C - Version 2.1-3 00:14:39.520 http://cunit.sourceforge.net/ 00:14:39.520 00:14:39.520 00:14:39.520 Suite: bdevio tests on: Nvme1n1 00:14:39.520 Test: blockdev write read block ...passed 00:14:39.520 Test: blockdev write zeroes read block ...passed 00:14:39.520 Test: blockdev write zeroes read no split ...passed 00:14:39.520 Test: blockdev write zeroes read split ...passed 00:14:39.520 Test: blockdev write zeroes read split partial ...passed 00:14:39.520 Test: blockdev reset ...[2024-04-15 16:07:09.484858] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:39.520 [2024-04-15 16:07:09.485186] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9cf60 (9): Bad file descriptor 00:14:39.778 [2024-04-15 16:07:09.502538] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:39.778 passed 00:14:39.778 Test: blockdev write read 8 blocks ...passed 00:14:39.778 Test: blockdev write read size > 128k ...passed 00:14:39.778 Test: blockdev write read invalid size ...passed 00:14:39.778 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:39.778 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:39.778 Test: blockdev write read max offset ...passed 00:14:39.778 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:39.778 Test: blockdev writev readv 8 blocks ...passed 00:14:39.778 Test: blockdev writev readv 30 x 1block ...passed 00:14:39.778 Test: blockdev writev readv block ...passed 00:14:39.778 Test: blockdev writev readv size > 128k ...passed 00:14:39.778 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:39.778 Test: blockdev comparev and writev ...[2024-04-15 16:07:09.511855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.778 [2024-04-15 16:07:09.512041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:39.778 [2024-04-15 16:07:09.512230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.778 [2024-04-15 16:07:09.512398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:39.778 [2024-04-15 16:07:09.512858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.778 [2024-04-15 16:07:09.513007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:39.778 [2024-04-15 16:07:09.513197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.778 [2024-04-15 16:07:09.513355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:39.778 [2024-04-15 16:07:09.513815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.778 [2024-04-15 16:07:09.513951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:39.778 [2024-04-15 16:07:09.514083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.778 [2024-04-15 16:07:09.514161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:39.778 [2024-04-15 16:07:09.514564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.778 [2024-04-15 16:07:09.514703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:39.778 [2024-04-15 16:07:09.514818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.778 [2024-04-15 16:07:09.514923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:39.778 passed 00:14:39.778 Test: blockdev nvme passthru rw ...passed 00:14:39.778 Test: blockdev nvme passthru vendor specific ...[2024-04-15 16:07:09.515979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:39.778 [2024-04-15 16:07:09.516114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:39.778 [2024-04-15 16:07:09.516304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:39.778 [2024-04-15 16:07:09.516459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:39.778 [2024-04-15 16:07:09.516667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:39.778 [2024-04-15 16:07:09.516781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:39.778 [2024-04-15 16:07:09.517038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:39.778 [2024-04-15 16:07:09.517165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:39.778 passed 00:14:39.778 Test: blockdev nvme admin passthru ...passed 00:14:39.778 Test: blockdev copy ...passed 00:14:39.778 00:14:39.778 Run Summary: Type Total Ran Passed Failed Inactive 00:14:39.778 suites 1 1 n/a 0 0 00:14:39.778 tests 23 23 23 0 0 00:14:39.778 asserts 152 152 152 0 n/a 00:14:39.778 00:14:39.778 Elapsed time = 0.161 seconds 00:14:39.778 16:07:09 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.778 16:07:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.778 16:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:39.778 16:07:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.778 16:07:09 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:39.778 16:07:09 -- target/bdevio.sh@30 -- # nvmftestfini 00:14:39.778 16:07:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:39.778 16:07:09 -- nvmf/common.sh@117 -- # sync 00:14:40.037 16:07:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.037 16:07:09 -- nvmf/common.sh@120 -- # set +e 00:14:40.037 16:07:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.037 16:07:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.037 rmmod nvme_tcp 00:14:40.037 rmmod nvme_fabrics 00:14:40.037 16:07:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.037 16:07:09 -- nvmf/common.sh@124 -- # set -e 00:14:40.037 16:07:09 -- nvmf/common.sh@125 -- # return 0 00:14:40.037 16:07:09 -- nvmf/common.sh@478 -- # '[' -n 81588 ']' 00:14:40.037 16:07:09 -- nvmf/common.sh@479 -- # killprocess 81588 00:14:40.037 16:07:09 -- common/autotest_common.sh@936 -- # '[' -z 81588 ']' 00:14:40.037 16:07:09 -- common/autotest_common.sh@940 -- # kill -0 81588 00:14:40.037 16:07:09 -- common/autotest_common.sh@941 -- # uname 00:14:40.037 16:07:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:40.037 16:07:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81588 00:14:40.037 16:07:09 -- common/autotest_common.sh@942 -- # process_name=reactor_3 
00:14:40.037 16:07:09 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:14:40.037 killing process with pid 81588 00:14:40.037 16:07:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81588' 00:14:40.037 16:07:09 -- common/autotest_common.sh@955 -- # kill 81588 00:14:40.037 16:07:09 -- common/autotest_common.sh@960 -- # wait 81588 00:14:40.295 16:07:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:40.295 16:07:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:40.295 16:07:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:40.295 16:07:10 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:40.295 16:07:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:40.295 16:07:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.295 16:07:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.295 16:07:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.296 16:07:10 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:40.296 00:14:40.296 real 0m2.721s 00:14:40.296 user 0m8.655s 00:14:40.296 sys 0m0.805s 00:14:40.296 16:07:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:40.296 ************************************ 00:14:40.296 END TEST nvmf_bdevio 00:14:40.296 ************************************ 00:14:40.296 16:07:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.296 16:07:10 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:14:40.296 16:07:10 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:40.296 16:07:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:40.296 16:07:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:40.296 16:07:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.296 ************************************ 00:14:40.296 START TEST nvmf_bdevio_no_huge 00:14:40.296 ************************************ 00:14:40.296 16:07:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:40.554 * Looking for test storage... 
00:14:40.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:40.554 16:07:10 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:40.554 16:07:10 -- nvmf/common.sh@7 -- # uname -s 00:14:40.554 16:07:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.554 16:07:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.554 16:07:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.554 16:07:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.554 16:07:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.554 16:07:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.554 16:07:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.554 16:07:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.554 16:07:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.554 16:07:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.554 16:07:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:14:40.554 16:07:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:14:40.554 16:07:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.554 16:07:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.554 16:07:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:40.554 16:07:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.554 16:07:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:40.554 16:07:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.554 16:07:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.554 16:07:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.554 16:07:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.554 16:07:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.554 16:07:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.554 16:07:10 -- paths/export.sh@5 -- # export PATH 00:14:40.554 16:07:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.554 16:07:10 -- nvmf/common.sh@47 -- # : 0 00:14:40.554 16:07:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:40.554 16:07:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:40.554 16:07:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.554 16:07:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.554 16:07:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.555 16:07:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:40.555 16:07:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:40.555 16:07:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:40.555 16:07:10 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:40.555 16:07:10 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:40.555 16:07:10 -- target/bdevio.sh@14 -- # nvmftestinit 00:14:40.555 16:07:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:40.555 16:07:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.555 16:07:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:40.555 16:07:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:40.555 16:07:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:40.555 16:07:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.555 16:07:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.555 16:07:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.555 16:07:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:40.555 16:07:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:40.555 16:07:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:40.555 16:07:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:40.555 16:07:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:40.555 16:07:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:40.555 16:07:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.555 16:07:10 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.555 16:07:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:40.555 16:07:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:40.555 16:07:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:40.555 16:07:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:40.555 16:07:10 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:40.555 16:07:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.555 16:07:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:40.555 16:07:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:40.555 16:07:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:40.555 16:07:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:40.555 16:07:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:40.555 16:07:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:40.555 Cannot find device "nvmf_tgt_br" 00:14:40.555 16:07:10 -- nvmf/common.sh@155 -- # true 00:14:40.555 16:07:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:40.555 Cannot find device "nvmf_tgt_br2" 00:14:40.555 16:07:10 -- nvmf/common.sh@156 -- # true 00:14:40.555 16:07:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:40.555 16:07:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:40.555 Cannot find device "nvmf_tgt_br" 00:14:40.555 16:07:10 -- nvmf/common.sh@158 -- # true 00:14:40.555 16:07:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:40.555 Cannot find device "nvmf_tgt_br2" 00:14:40.555 16:07:10 -- nvmf/common.sh@159 -- # true 00:14:40.555 16:07:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:40.555 16:07:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:40.555 16:07:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:40.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:40.555 16:07:10 -- nvmf/common.sh@162 -- # true 00:14:40.555 16:07:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:40.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:40.555 16:07:10 -- nvmf/common.sh@163 -- # true 00:14:40.555 16:07:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:40.555 16:07:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:40.555 16:07:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:40.555 16:07:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:40.813 16:07:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:40.813 16:07:10 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:40.813 16:07:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:40.813 16:07:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:40.813 16:07:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:40.813 16:07:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:40.813 16:07:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:40.813 16:07:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:40.813 16:07:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:40.814 16:07:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:40.814 16:07:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:40.814 16:07:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:14:40.814 16:07:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:40.814 16:07:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:40.814 16:07:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:40.814 16:07:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:40.814 16:07:10 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:40.814 16:07:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:40.814 16:07:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:40.814 16:07:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:40.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:14:40.814 00:14:40.814 --- 10.0.0.2 ping statistics --- 00:14:40.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.814 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:40.814 16:07:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:40.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:40.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:14:40.814 00:14:40.814 --- 10.0.0.3 ping statistics --- 00:14:40.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.814 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:40.814 16:07:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:40.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:40.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:40.814 00:14:40.814 --- 10.0.0.1 ping statistics --- 00:14:40.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.814 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:40.814 16:07:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.814 16:07:10 -- nvmf/common.sh@422 -- # return 0 00:14:40.814 16:07:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:40.814 16:07:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.814 16:07:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:40.814 16:07:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:40.814 16:07:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.814 16:07:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:40.814 16:07:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:40.814 16:07:10 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:40.814 16:07:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:40.814 16:07:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:40.814 16:07:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.814 16:07:10 -- nvmf/common.sh@470 -- # nvmfpid=81810 00:14:40.814 16:07:10 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:40.814 16:07:10 -- nvmf/common.sh@471 -- # waitforlisten 81810 00:14:40.814 16:07:10 -- common/autotest_common.sh@817 -- # '[' -z 81810 ']' 00:14:40.814 16:07:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.814 16:07:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:40.814 16:07:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:40.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.814 16:07:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:40.814 16:07:10 -- common/autotest_common.sh@10 -- # set +x 00:14:41.071 [2024-04-15 16:07:10.825158] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:14:41.071 [2024-04-15 16:07:10.825258] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:41.071 [2024-04-15 16:07:10.990417] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.329 [2024-04-15 16:07:11.100313] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.329 [2024-04-15 16:07:11.100375] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.329 [2024-04-15 16:07:11.100391] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.329 [2024-04-15 16:07:11.100404] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.329 [2024-04-15 16:07:11.100415] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.329 [2024-04-15 16:07:11.100605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:41.329 [2024-04-15 16:07:11.100756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:41.329 [2024-04-15 16:07:11.101385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:41.329 [2024-04-15 16:07:11.101392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.265 16:07:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:42.265 16:07:11 -- common/autotest_common.sh@850 -- # return 0 00:14:42.265 16:07:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:42.265 16:07:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:42.265 16:07:11 -- common/autotest_common.sh@10 -- # set +x 00:14:42.265 16:07:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.265 16:07:11 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:42.265 16:07:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:42.265 16:07:11 -- common/autotest_common.sh@10 -- # set +x 00:14:42.265 [2024-04-15 16:07:11.912683] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.265 16:07:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:42.265 16:07:11 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:42.265 16:07:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:42.265 16:07:11 -- common/autotest_common.sh@10 -- # set +x 00:14:42.265 Malloc0 00:14:42.265 16:07:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:42.265 16:07:11 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:42.265 16:07:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:42.265 16:07:11 -- common/autotest_common.sh@10 -- # set +x 00:14:42.265 16:07:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:42.265 16:07:11 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:42.265 16:07:11 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:14:42.265 16:07:11 -- common/autotest_common.sh@10 -- # set +x 00:14:42.265 16:07:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:42.265 16:07:11 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.265 16:07:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:42.265 16:07:11 -- common/autotest_common.sh@10 -- # set +x 00:14:42.265 [2024-04-15 16:07:11.952856] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.265 16:07:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:42.265 16:07:11 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:42.265 16:07:11 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:42.265 16:07:11 -- nvmf/common.sh@521 -- # config=() 00:14:42.265 16:07:11 -- nvmf/common.sh@521 -- # local subsystem config 00:14:42.265 16:07:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:42.265 16:07:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:42.265 { 00:14:42.265 "params": { 00:14:42.265 "name": "Nvme$subsystem", 00:14:42.265 "trtype": "$TEST_TRANSPORT", 00:14:42.265 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:42.265 "adrfam": "ipv4", 00:14:42.265 "trsvcid": "$NVMF_PORT", 00:14:42.265 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:42.265 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:42.265 "hdgst": ${hdgst:-false}, 00:14:42.265 "ddgst": ${ddgst:-false} 00:14:42.265 }, 00:14:42.265 "method": "bdev_nvme_attach_controller" 00:14:42.265 } 00:14:42.265 EOF 00:14:42.265 )") 00:14:42.265 16:07:11 -- nvmf/common.sh@543 -- # cat 00:14:42.265 16:07:11 -- nvmf/common.sh@545 -- # jq . 00:14:42.265 16:07:11 -- nvmf/common.sh@546 -- # IFS=, 00:14:42.265 16:07:11 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:42.265 "params": { 00:14:42.265 "name": "Nvme1", 00:14:42.265 "trtype": "tcp", 00:14:42.265 "traddr": "10.0.0.2", 00:14:42.265 "adrfam": "ipv4", 00:14:42.265 "trsvcid": "4420", 00:14:42.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:42.265 "hdgst": false, 00:14:42.265 "ddgst": false 00:14:42.265 }, 00:14:42.265 "method": "bdev_nvme_attach_controller" 00:14:42.265 }' 00:14:42.265 [2024-04-15 16:07:12.005163] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:14:42.265 [2024-04-15 16:07:12.005259] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid81848 ] 00:14:42.265 [2024-04-15 16:07:12.161349] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:42.522 [2024-04-15 16:07:12.312141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.522 [2024-04-15 16:07:12.312302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.522 [2024-04-15 16:07:12.312530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.522 [2024-04-15 16:07:12.321528] app.c: 357:app_do_spdk_subsystem_init: *NOTICE*: RPC server not started 00:14:42.780 I/O targets: 00:14:42.780 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:42.780 00:14:42.780 00:14:42.780 CUnit - A unit testing framework for C - Version 2.1-3 00:14:42.780 http://cunit.sourceforge.net/ 00:14:42.780 00:14:42.780 00:14:42.780 Suite: bdevio tests on: Nvme1n1 00:14:42.780 Test: blockdev write read block ...passed 00:14:42.780 Test: blockdev write zeroes read block ...passed 00:14:42.780 Test: blockdev write zeroes read no split ...passed 00:14:42.780 Test: blockdev write zeroes read split ...passed 00:14:42.780 Test: blockdev write zeroes read split partial ...passed 00:14:42.780 Test: blockdev reset ...[2024-04-15 16:07:12.553783] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:42.780 [2024-04-15 16:07:12.554105] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc12340 (9): Bad file descriptor 00:14:42.780 [2024-04-15 16:07:12.564930] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:42.780 passed 00:14:42.780 Test: blockdev write read 8 blocks ...passed 00:14:42.780 Test: blockdev write read size > 128k ...passed 00:14:42.780 Test: blockdev write read invalid size ...passed 00:14:42.780 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:42.780 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:42.780 Test: blockdev write read max offset ...passed 00:14:42.780 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:42.780 Test: blockdev writev readv 8 blocks ...passed 00:14:42.780 Test: blockdev writev readv 30 x 1block ...passed 00:14:42.780 Test: blockdev writev readv block ...passed 00:14:42.780 Test: blockdev writev readv size > 128k ...passed 00:14:42.780 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:42.780 Test: blockdev comparev and writev ...[2024-04-15 16:07:12.572787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.780 [2024-04-15 16:07:12.572939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.780 [2024-04-15 16:07:12.573034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.780 [2024-04-15 16:07:12.573107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:42.780 [2024-04-15 16:07:12.573459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.780 [2024-04-15 16:07:12.573550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:42.780 [2024-04-15 16:07:12.573639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.780 [2024-04-15 16:07:12.573711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:42.780 [2024-04-15 16:07:12.574038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.780 [2024-04-15 16:07:12.574115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:42.780 [2024-04-15 16:07:12.574185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.780 [2024-04-15 16:07:12.574243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:42.780 [2024-04-15 16:07:12.574626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.780 [2024-04-15 16:07:12.574716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:42.780 [2024-04-15 16:07:12.574785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.781 [2024-04-15 16:07:12.574841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:42.781 passed 00:14:42.781 Test: blockdev nvme passthru rw ...passed 00:14:42.781 Test: blockdev nvme passthru vendor specific ...[2024-04-15 16:07:12.575709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:42.781 [2024-04-15 16:07:12.575813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:42.781 [2024-04-15 16:07:12.575984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:42.781 [2024-04-15 16:07:12.576062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:42.781 [2024-04-15 16:07:12.576214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:42.781 [2024-04-15 16:07:12.576284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:42.781 [2024-04-15 16:07:12.576435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:42.781 [2024-04-15 16:07:12.576505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:42.781 passed 00:14:42.781 Test: blockdev nvme admin passthru ...passed 00:14:42.781 Test: blockdev copy ...passed 00:14:42.781 00:14:42.781 Run Summary: Type Total Ran Passed Failed Inactive 00:14:42.781 suites 1 1 n/a 0 0 00:14:42.781 tests 23 23 23 0 0 00:14:42.781 asserts 152 152 152 0 n/a 00:14:42.781 00:14:42.781 Elapsed time = 0.181 seconds 00:14:43.042 16:07:12 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.042 16:07:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.042 16:07:12 -- common/autotest_common.sh@10 -- # set +x 00:14:43.042 16:07:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.042 16:07:12 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:43.042 16:07:12 -- target/bdevio.sh@30 -- # nvmftestfini 00:14:43.042 16:07:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:43.042 16:07:12 -- nvmf/common.sh@117 -- # sync 00:14:43.300 16:07:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:43.301 16:07:13 -- nvmf/common.sh@120 -- # set +e 00:14:43.301 16:07:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:43.301 16:07:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:43.301 rmmod nvme_tcp 00:14:43.301 rmmod nvme_fabrics 00:14:43.301 16:07:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:43.301 16:07:13 -- nvmf/common.sh@124 -- # set -e 00:14:43.301 16:07:13 -- nvmf/common.sh@125 -- # return 0 00:14:43.301 16:07:13 -- nvmf/common.sh@478 -- # '[' -n 81810 ']' 00:14:43.301 16:07:13 -- nvmf/common.sh@479 -- # killprocess 81810 00:14:43.301 16:07:13 -- common/autotest_common.sh@936 -- # '[' -z 81810 ']' 00:14:43.301 16:07:13 -- common/autotest_common.sh@940 -- # kill -0 81810 00:14:43.301 16:07:13 -- common/autotest_common.sh@941 -- # uname 00:14:43.301 16:07:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:43.301 16:07:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81810 00:14:43.301 16:07:13 -- common/autotest_common.sh@942 -- # process_name=reactor_3 
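For reference, the target-side configuration exercised by the bdevio run above reduces to the RPC sequence below, reconstructed from the rpc_cmd traces in this log. This is a minimal sketch, assuming a target that is already running and reachable on the default /var/tmp/spdk.sock RPC socket; the test itself issues the same calls through rpc_cmd inside the nvmf_tgt_ns_spdk namespace.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport with the options used by the test (-o, 8192-byte IO unit)
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks, used as the namespace of cnode1
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # listener on the veth address inside the target namespace
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420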
00:14:43.301 16:07:13 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:14:43.301 killing process with pid 81810 00:14:43.301 16:07:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81810' 00:14:43.301 16:07:13 -- common/autotest_common.sh@955 -- # kill 81810 00:14:43.301 16:07:13 -- common/autotest_common.sh@960 -- # wait 81810 00:14:43.559 16:07:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:43.559 16:07:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:43.559 16:07:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:43.559 16:07:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:43.559 16:07:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:43.559 16:07:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.559 16:07:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.559 16:07:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.559 16:07:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:43.559 00:14:43.560 real 0m3.321s 00:14:43.560 user 0m10.685s 00:14:43.560 sys 0m1.487s 00:14:43.560 16:07:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:43.560 16:07:13 -- common/autotest_common.sh@10 -- # set +x 00:14:43.560 ************************************ 00:14:43.560 END TEST nvmf_bdevio_no_huge 00:14:43.560 ************************************ 00:14:43.818 16:07:13 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:43.818 16:07:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:43.818 16:07:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:43.818 16:07:13 -- common/autotest_common.sh@10 -- # set +x 00:14:43.818 ************************************ 00:14:43.818 START TEST nvmf_tls 00:14:43.818 ************************************ 00:14:43.818 16:07:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:43.818 * Looking for test storage... 
00:14:43.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:43.818 16:07:13 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:43.818 16:07:13 -- nvmf/common.sh@7 -- # uname -s 00:14:43.818 16:07:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.818 16:07:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.818 16:07:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.818 16:07:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.818 16:07:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.818 16:07:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.819 16:07:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.819 16:07:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.819 16:07:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.819 16:07:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.819 16:07:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:14:43.819 16:07:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:14:43.819 16:07:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.819 16:07:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.819 16:07:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:43.819 16:07:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.819 16:07:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:43.819 16:07:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.819 16:07:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.819 16:07:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.819 16:07:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.819 16:07:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.819 16:07:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.819 16:07:13 -- paths/export.sh@5 -- # export PATH 00:14:43.819 16:07:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.819 16:07:13 -- nvmf/common.sh@47 -- # : 0 00:14:43.819 16:07:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.819 16:07:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.819 16:07:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.819 16:07:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.819 16:07:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.819 16:07:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.819 16:07:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.819 16:07:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.819 16:07:13 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:43.819 16:07:13 -- target/tls.sh@62 -- # nvmftestinit 00:14:43.819 16:07:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:43.819 16:07:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.819 16:07:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:43.819 16:07:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:43.819 16:07:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:43.819 16:07:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.819 16:07:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.819 16:07:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.819 16:07:13 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:43.819 16:07:13 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:43.819 16:07:13 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:43.819 16:07:13 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:43.819 16:07:13 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:43.819 16:07:13 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:43.819 16:07:13 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.819 16:07:13 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.819 16:07:13 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:43.819 16:07:13 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:43.819 16:07:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:43.819 16:07:13 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:43.819 16:07:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:43.819 
16:07:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.819 16:07:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:43.819 16:07:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:43.819 16:07:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:43.819 16:07:13 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:43.819 16:07:13 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:44.077 16:07:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:44.077 Cannot find device "nvmf_tgt_br" 00:14:44.077 16:07:13 -- nvmf/common.sh@155 -- # true 00:14:44.077 16:07:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:44.077 Cannot find device "nvmf_tgt_br2" 00:14:44.077 16:07:13 -- nvmf/common.sh@156 -- # true 00:14:44.077 16:07:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:44.077 16:07:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:44.077 Cannot find device "nvmf_tgt_br" 00:14:44.077 16:07:13 -- nvmf/common.sh@158 -- # true 00:14:44.077 16:07:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:44.077 Cannot find device "nvmf_tgt_br2" 00:14:44.077 16:07:13 -- nvmf/common.sh@159 -- # true 00:14:44.077 16:07:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:44.077 16:07:13 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:44.077 16:07:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:44.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:44.077 16:07:13 -- nvmf/common.sh@162 -- # true 00:14:44.077 16:07:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:44.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:44.077 16:07:13 -- nvmf/common.sh@163 -- # true 00:14:44.077 16:07:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:44.077 16:07:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:44.077 16:07:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:44.077 16:07:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:44.077 16:07:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:44.077 16:07:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:44.336 16:07:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:44.336 16:07:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:44.336 16:07:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:44.336 16:07:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:44.336 16:07:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:44.336 16:07:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:44.336 16:07:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:44.336 16:07:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:44.336 16:07:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:44.336 16:07:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:44.336 16:07:14 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:44.336 16:07:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:44.336 16:07:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:44.336 16:07:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:44.336 16:07:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:44.336 16:07:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:44.336 16:07:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:44.336 16:07:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:44.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:14:44.336 00:14:44.336 --- 10.0.0.2 ping statistics --- 00:14:44.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.336 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:14:44.336 16:07:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:44.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:44.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:14:44.336 00:14:44.336 --- 10.0.0.3 ping statistics --- 00:14:44.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.336 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:44.336 16:07:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:44.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:44.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:14:44.336 00:14:44.336 --- 10.0.0.1 ping statistics --- 00:14:44.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.336 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:44.336 16:07:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.336 16:07:14 -- nvmf/common.sh@422 -- # return 0 00:14:44.336 16:07:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:44.336 16:07:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.336 16:07:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:44.336 16:07:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:44.336 16:07:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.336 16:07:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:44.336 16:07:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:44.336 16:07:14 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:44.336 16:07:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:44.336 16:07:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:44.336 16:07:14 -- common/autotest_common.sh@10 -- # set +x 00:14:44.336 16:07:14 -- nvmf/common.sh@470 -- # nvmfpid=82033 00:14:44.336 16:07:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:44.336 16:07:14 -- nvmf/common.sh@471 -- # waitforlisten 82033 00:14:44.336 16:07:14 -- common/autotest_common.sh@817 -- # '[' -z 82033 ']' 00:14:44.336 16:07:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.336 16:07:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:44.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
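The nvmf_veth_init sequence traced above rebuilds, for the TLS tests, the same virtual topology the previous test used: an initiator veth on the host and two target veths inside the nvmf_tgt_ns_spdk namespace, joined by the nvmf_br bridge. Condensed to its essential commands, this is a sketch of what the trace shows for the first target interface; the nvmf_tgt_if2/10.0.0.3 pair is set up the same way, and the various 'ip link set ... up' steps are omitted.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side, on the host
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side, in the namespace
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in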
00:14:44.336 16:07:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.336 16:07:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:44.336 16:07:14 -- common/autotest_common.sh@10 -- # set +x 00:14:44.336 [2024-04-15 16:07:14.284732] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:14:44.336 [2024-04-15 16:07:14.284836] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.594 [2024-04-15 16:07:14.429671] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.594 [2024-04-15 16:07:14.479237] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.594 [2024-04-15 16:07:14.479285] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.594 [2024-04-15 16:07:14.479297] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.594 [2024-04-15 16:07:14.479306] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.594 [2024-04-15 16:07:14.479315] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.594 [2024-04-15 16:07:14.479344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.530 16:07:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:45.530 16:07:15 -- common/autotest_common.sh@850 -- # return 0 00:14:45.530 16:07:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:45.530 16:07:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:45.530 16:07:15 -- common/autotest_common.sh@10 -- # set +x 00:14:45.530 16:07:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.530 16:07:15 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:45.530 16:07:15 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:45.789 true 00:14:45.789 16:07:15 -- target/tls.sh@73 -- # jq -r .tls_version 00:14:45.789 16:07:15 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:46.047 16:07:15 -- target/tls.sh@73 -- # version=0 00:14:46.047 16:07:15 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:46.047 16:07:15 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:46.305 16:07:16 -- target/tls.sh@81 -- # jq -r .tls_version 00:14:46.305 16:07:16 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:46.563 16:07:16 -- target/tls.sh@81 -- # version=13 00:14:46.563 16:07:16 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:46.563 16:07:16 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:46.822 16:07:16 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:46.822 16:07:16 -- target/tls.sh@89 -- # jq -r .tls_version 00:14:47.080 16:07:16 -- target/tls.sh@89 -- # version=7 00:14:47.080 16:07:16 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:47.080 16:07:16 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:14:47.080 16:07:16 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:47.338 16:07:17 -- target/tls.sh@96 -- # ktls=false 00:14:47.338 16:07:17 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:47.338 16:07:17 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:47.596 16:07:17 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:14:47.596 16:07:17 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:47.855 16:07:17 -- target/tls.sh@104 -- # ktls=true 00:14:47.855 16:07:17 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:14:47.855 16:07:17 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:48.113 16:07:17 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:48.113 16:07:17 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:14:48.372 16:07:18 -- target/tls.sh@112 -- # ktls=false 00:14:48.372 16:07:18 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:14:48.372 16:07:18 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:48.372 16:07:18 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:48.372 16:07:18 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:48.372 16:07:18 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:48.372 16:07:18 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:14:48.372 16:07:18 -- nvmf/common.sh@693 -- # digest=1 00:14:48.372 16:07:18 -- nvmf/common.sh@694 -- # python - 00:14:48.372 16:07:18 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:48.372 16:07:18 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:48.372 16:07:18 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:48.372 16:07:18 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:48.372 16:07:18 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:48.372 16:07:18 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:14:48.372 16:07:18 -- nvmf/common.sh@693 -- # digest=1 00:14:48.372 16:07:18 -- nvmf/common.sh@694 -- # python - 00:14:48.630 16:07:18 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:48.630 16:07:18 -- target/tls.sh@121 -- # mktemp 00:14:48.630 16:07:18 -- target/tls.sh@121 -- # key_path=/tmp/tmp.fCnSqI4vdS 00:14:48.630 16:07:18 -- target/tls.sh@122 -- # mktemp 00:14:48.630 16:07:18 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.bH8QyXnFOQ 00:14:48.630 16:07:18 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:48.630 16:07:18 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:48.630 16:07:18 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.fCnSqI4vdS 00:14:48.630 16:07:18 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.bH8QyXnFOQ 00:14:48.630 16:07:18 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:48.892 16:07:18 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:49.150 16:07:18 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.fCnSqI4vdS 00:14:49.150 16:07:18 -- target/tls.sh@49 -- # local 
key=/tmp/tmp.fCnSqI4vdS 00:14:49.150 16:07:18 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:49.407 [2024-04-15 16:07:19.225287] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.407 16:07:19 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:49.715 16:07:19 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:49.715 [2024-04-15 16:07:19.625348] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:49.715 [2024-04-15 16:07:19.625603] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.716 16:07:19 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:49.973 malloc0 00:14:49.973 16:07:19 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:50.231 16:07:20 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fCnSqI4vdS 00:14:50.490 [2024-04-15 16:07:20.202423] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:50.490 16:07:20 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.fCnSqI4vdS 00:15:00.459 Initializing NVMe Controllers 00:15:00.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:00.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:00.459 Initialization complete. Launching workers. 
00:15:00.459 ======================================================== 00:15:00.459 Latency(us) 00:15:00.459 Device Information : IOPS MiB/s Average min max 00:15:00.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13395.29 52.33 4778.37 1383.62 6106.78 00:15:00.459 ======================================================== 00:15:00.459 Total : 13395.29 52.33 4778.37 1383.62 6106.78 00:15:00.459 00:15:00.459 16:07:30 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCnSqI4vdS 00:15:00.459 16:07:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:00.459 16:07:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:00.459 16:07:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:00.459 16:07:30 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fCnSqI4vdS' 00:15:00.459 16:07:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.459 16:07:30 -- target/tls.sh@28 -- # bdevperf_pid=82270 00:15:00.459 16:07:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:00.459 16:07:30 -- target/tls.sh@31 -- # waitforlisten 82270 /var/tmp/bdevperf.sock 00:15:00.459 16:07:30 -- common/autotest_common.sh@817 -- # '[' -z 82270 ']' 00:15:00.459 16:07:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.459 16:07:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:00.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.459 16:07:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.459 16:07:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:00.459 16:07:30 -- common/autotest_common.sh@10 -- # set +x 00:15:00.459 16:07:30 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:00.718 [2024-04-15 16:07:30.457346] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:15:00.718 [2024-04-15 16:07:30.457445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82270 ] 00:15:00.718 [2024-04-15 16:07:30.606027] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.718 [2024-04-15 16:07:30.660110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.652 16:07:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:01.652 16:07:31 -- common/autotest_common.sh@850 -- # return 0 00:15:01.652 16:07:31 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fCnSqI4vdS 00:15:01.911 [2024-04-15 16:07:31.708668] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:01.911 [2024-04-15 16:07:31.708785] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:01.911 TLSTESTn1 00:15:01.911 16:07:31 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:02.169 Running I/O for 10 seconds... 00:15:12.135 00:15:12.135 Latency(us) 00:15:12.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.135 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:12.135 Verification LBA range: start 0x0 length 0x2000 00:15:12.135 TLSTESTn1 : 10.01 5330.78 20.82 0.00 0.00 23971.79 4337.86 18849.40 00:15:12.135 =================================================================================================================== 00:15:12.135 Total : 5330.78 20.82 0.00 0.00 23971.79 4337.86 18849.40 00:15:12.135 0 00:15:12.135 16:07:41 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:12.135 16:07:41 -- target/tls.sh@45 -- # killprocess 82270 00:15:12.135 16:07:41 -- common/autotest_common.sh@936 -- # '[' -z 82270 ']' 00:15:12.135 16:07:41 -- common/autotest_common.sh@940 -- # kill -0 82270 00:15:12.135 16:07:41 -- common/autotest_common.sh@941 -- # uname 00:15:12.135 16:07:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.135 16:07:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82270 00:15:12.135 killing process with pid 82270 00:15:12.135 Received shutdown signal, test time was about 10.000000 seconds 00:15:12.135 00:15:12.135 Latency(us) 00:15:12.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.135 =================================================================================================================== 00:15:12.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:12.135 16:07:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:12.135 16:07:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:12.135 16:07:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82270' 00:15:12.135 16:07:41 -- common/autotest_common.sh@955 -- # kill 82270 00:15:12.135 [2024-04-15 16:07:41.977400] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:12.135 
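The bdevperf job that just finished attaches its TLS-enabled NVMe bdev through the deprecated --psk file path (hence the deprecation warning above). Stripped of the test harness, the initiator side reduces to the sketch below; it assumes bdevperf is already running with its RPC socket at /var/tmp/bdevperf.sock, and that the target was configured with a -k (TLS) listener plus nvmf_subsystem_add_host --psk for the same key file, as the earlier trace shows.

  # interchange-format PSK written earlier by the test, file mode 0600
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/tmp.fCnSqI4vdS
  chmod 0600 /tmp/tmp.fCnSqI4vdS
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.fCnSqI4vdS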
16:07:41 -- common/autotest_common.sh@960 -- # wait 82270 00:15:12.394 16:07:42 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bH8QyXnFOQ 00:15:12.394 16:07:42 -- common/autotest_common.sh@638 -- # local es=0 00:15:12.394 16:07:42 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bH8QyXnFOQ 00:15:12.394 16:07:42 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:15:12.394 16:07:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:12.394 16:07:42 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:15:12.394 16:07:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:12.394 16:07:42 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bH8QyXnFOQ 00:15:12.394 16:07:42 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:12.394 16:07:42 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:12.394 16:07:42 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:12.394 16:07:42 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bH8QyXnFOQ' 00:15:12.394 16:07:42 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:12.394 16:07:42 -- target/tls.sh@28 -- # bdevperf_pid=82398 00:15:12.394 16:07:42 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:12.394 16:07:42 -- target/tls.sh@31 -- # waitforlisten 82398 /var/tmp/bdevperf.sock 00:15:12.394 16:07:42 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:12.394 16:07:42 -- common/autotest_common.sh@817 -- # '[' -z 82398 ']' 00:15:12.394 16:07:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.394 16:07:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:12.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:12.394 16:07:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:12.394 16:07:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:12.394 16:07:42 -- common/autotest_common.sh@10 -- # set +x 00:15:12.394 [2024-04-15 16:07:42.220689] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:15:12.394 [2024-04-15 16:07:42.221011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82398 ] 00:15:12.652 [2024-04-15 16:07:42.368537] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.652 [2024-04-15 16:07:42.421606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.652 16:07:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:12.652 16:07:42 -- common/autotest_common.sh@850 -- # return 0 00:15:12.652 16:07:42 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bH8QyXnFOQ 00:15:12.911 [2024-04-15 16:07:42.767491] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:12.911 [2024-04-15 16:07:42.767638] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:12.911 [2024-04-15 16:07:42.778605] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:12.911 [2024-04-15 16:07:42.779277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220f7e0 (107): Transport endpoint is not connected 00:15:12.911 [2024-04-15 16:07:42.780266] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220f7e0 (9): Bad file descriptor 00:15:12.911 [2024-04-15 16:07:42.781264] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:12.911 [2024-04-15 16:07:42.781286] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:12.911 [2024-04-15 16:07:42.781301] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:12.911 request: 00:15:12.911 { 00:15:12.911 "name": "TLSTEST", 00:15:12.911 "trtype": "tcp", 00:15:12.911 "traddr": "10.0.0.2", 00:15:12.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:12.911 "adrfam": "ipv4", 00:15:12.911 "trsvcid": "4420", 00:15:12.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.911 "psk": "/tmp/tmp.bH8QyXnFOQ", 00:15:12.911 "method": "bdev_nvme_attach_controller", 00:15:12.911 "req_id": 1 00:15:12.911 } 00:15:12.911 Got JSON-RPC error response 00:15:12.911 response: 00:15:12.911 { 00:15:12.911 "code": -32602, 00:15:12.911 "message": "Invalid parameters" 00:15:12.911 } 00:15:12.911 16:07:42 -- target/tls.sh@36 -- # killprocess 82398 00:15:12.911 16:07:42 -- common/autotest_common.sh@936 -- # '[' -z 82398 ']' 00:15:12.911 16:07:42 -- common/autotest_common.sh@940 -- # kill -0 82398 00:15:12.911 16:07:42 -- common/autotest_common.sh@941 -- # uname 00:15:12.911 16:07:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.911 16:07:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82398 00:15:12.911 killing process with pid 82398 00:15:12.911 Received shutdown signal, test time was about 10.000000 seconds 00:15:12.911 00:15:12.911 Latency(us) 00:15:12.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.911 =================================================================================================================== 00:15:12.911 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:12.911 16:07:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:12.911 16:07:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:12.911 16:07:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82398' 00:15:12.911 16:07:42 -- common/autotest_common.sh@955 -- # kill 82398 00:15:12.911 [2024-04-15 16:07:42.836260] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:12.911 16:07:42 -- common/autotest_common.sh@960 -- # wait 82398 00:15:13.170 16:07:43 -- target/tls.sh@37 -- # return 1 00:15:13.170 16:07:43 -- common/autotest_common.sh@641 -- # es=1 00:15:13.170 16:07:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:13.170 16:07:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:13.170 16:07:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:13.170 16:07:43 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fCnSqI4vdS 00:15:13.170 16:07:43 -- common/autotest_common.sh@638 -- # local es=0 00:15:13.170 16:07:43 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fCnSqI4vdS 00:15:13.170 16:07:43 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:15:13.170 16:07:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:13.170 16:07:43 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:15:13.170 16:07:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:13.170 16:07:43 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fCnSqI4vdS 00:15:13.170 16:07:43 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:13.170 16:07:43 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:13.170 16:07:43 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:13.170 
16:07:43 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fCnSqI4vdS' 00:15:13.170 16:07:43 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:13.170 16:07:43 -- target/tls.sh@28 -- # bdevperf_pid=82418 00:15:13.170 16:07:43 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:13.170 16:07:43 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:13.170 16:07:43 -- target/tls.sh@31 -- # waitforlisten 82418 /var/tmp/bdevperf.sock 00:15:13.170 16:07:43 -- common/autotest_common.sh@817 -- # '[' -z 82418 ']' 00:15:13.170 16:07:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.170 16:07:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:13.170 16:07:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.170 16:07:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:13.170 16:07:43 -- common/autotest_common.sh@10 -- # set +x 00:15:13.170 [2024-04-15 16:07:43.064373] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:13.170 [2024-04-15 16:07:43.064475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82418 ] 00:15:13.427 [2024-04-15 16:07:43.202867] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.427 [2024-04-15 16:07:43.249671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.427 16:07:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:13.427 16:07:43 -- common/autotest_common.sh@850 -- # return 0 00:15:13.427 16:07:43 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.fCnSqI4vdS 00:15:13.686 [2024-04-15 16:07:43.609471] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:13.686 [2024-04-15 16:07:43.609586] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:13.686 [2024-04-15 16:07:43.618627] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:13.686 [2024-04-15 16:07:43.618681] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:13.686 [2024-04-15 16:07:43.618727] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:13.686 [2024-04-15 16:07:43.618766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec77e0 (107): Transport endpoint is not connected 00:15:13.686 [2024-04-15 16:07:43.619754] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec77e0 (9): Bad file descriptor 00:15:13.686 [2024-04-15 
16:07:43.620753] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:13.686 [2024-04-15 16:07:43.620779] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:13.686 [2024-04-15 16:07:43.620796] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:13.686 request: 00:15:13.686 { 00:15:13.686 "name": "TLSTEST", 00:15:13.686 "trtype": "tcp", 00:15:13.686 "traddr": "10.0.0.2", 00:15:13.686 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:13.686 "adrfam": "ipv4", 00:15:13.686 "trsvcid": "4420", 00:15:13.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.686 "psk": "/tmp/tmp.fCnSqI4vdS", 00:15:13.686 "method": "bdev_nvme_attach_controller", 00:15:13.686 "req_id": 1 00:15:13.686 } 00:15:13.686 Got JSON-RPC error response 00:15:13.686 response: 00:15:13.686 { 00:15:13.686 "code": -32602, 00:15:13.686 "message": "Invalid parameters" 00:15:13.686 } 00:15:13.686 16:07:43 -- target/tls.sh@36 -- # killprocess 82418 00:15:13.686 16:07:43 -- common/autotest_common.sh@936 -- # '[' -z 82418 ']' 00:15:13.686 16:07:43 -- common/autotest_common.sh@940 -- # kill -0 82418 00:15:13.686 16:07:43 -- common/autotest_common.sh@941 -- # uname 00:15:13.686 16:07:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:13.686 16:07:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82418 00:15:13.944 killing process with pid 82418 00:15:13.944 Received shutdown signal, test time was about 10.000000 seconds 00:15:13.944 00:15:13.944 Latency(us) 00:15:13.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.944 =================================================================================================================== 00:15:13.944 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:13.944 16:07:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:13.944 16:07:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:13.944 16:07:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82418' 00:15:13.944 16:07:43 -- common/autotest_common.sh@955 -- # kill 82418 00:15:13.944 [2024-04-15 16:07:43.669781] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:13.944 16:07:43 -- common/autotest_common.sh@960 -- # wait 82418 00:15:13.944 16:07:43 -- target/tls.sh@37 -- # return 1 00:15:13.944 16:07:43 -- common/autotest_common.sh@641 -- # es=1 00:15:13.944 16:07:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:13.944 16:07:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:13.944 16:07:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:13.944 16:07:43 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCnSqI4vdS 00:15:13.944 16:07:43 -- common/autotest_common.sh@638 -- # local es=0 00:15:13.944 16:07:43 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCnSqI4vdS 00:15:13.944 16:07:43 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:15:13.944 16:07:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:13.944 16:07:43 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:15:13.944 16:07:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:13.944 
16:07:43 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCnSqI4vdS 00:15:13.944 16:07:43 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:13.944 16:07:43 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:13.944 16:07:43 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:13.944 16:07:43 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fCnSqI4vdS' 00:15:13.944 16:07:43 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:13.944 16:07:43 -- target/tls.sh@28 -- # bdevperf_pid=82438 00:15:13.944 16:07:43 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:13.944 16:07:43 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:13.944 16:07:43 -- target/tls.sh@31 -- # waitforlisten 82438 /var/tmp/bdevperf.sock 00:15:13.944 16:07:43 -- common/autotest_common.sh@817 -- # '[' -z 82438 ']' 00:15:13.944 16:07:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.944 16:07:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:13.944 16:07:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.944 16:07:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:13.944 16:07:43 -- common/autotest_common.sh@10 -- # set +x 00:15:13.944 [2024-04-15 16:07:43.899468] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:13.944 [2024-04-15 16:07:43.899552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82438 ] 00:15:14.202 [2024-04-15 16:07:44.028381] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.202 [2024-04-15 16:07:44.073450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.136 16:07:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:15.136 16:07:44 -- common/autotest_common.sh@850 -- # return 0 00:15:15.136 16:07:44 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fCnSqI4vdS 00:15:15.395 [2024-04-15 16:07:45.107910] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:15.395 [2024-04-15 16:07:45.108040] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:15.395 [2024-04-15 16:07:45.118823] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:15.395 [2024-04-15 16:07:45.118864] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:15.395 [2024-04-15 16:07:45.118910] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:15:15.395 [2024-04-15 16:07:45.119433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb397e0 (107): Transport endpoint is not connected 00:15:15.395 [2024-04-15 16:07:45.120421] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb397e0 (9): Bad file descriptor 00:15:15.395 [2024-04-15 16:07:45.121420] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:15:15.395 [2024-04-15 16:07:45.121443] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:15.395 [2024-04-15 16:07:45.121458] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:15:15.395 request: 00:15:15.395 { 00:15:15.395 "name": "TLSTEST", 00:15:15.395 "trtype": "tcp", 00:15:15.395 "traddr": "10.0.0.2", 00:15:15.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:15.395 "adrfam": "ipv4", 00:15:15.395 "trsvcid": "4420", 00:15:15.395 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:15.395 "psk": "/tmp/tmp.fCnSqI4vdS", 00:15:15.395 "method": "bdev_nvme_attach_controller", 00:15:15.395 "req_id": 1 00:15:15.395 } 00:15:15.395 Got JSON-RPC error response 00:15:15.395 response: 00:15:15.395 { 00:15:15.395 "code": -32602, 00:15:15.395 "message": "Invalid parameters" 00:15:15.395 } 00:15:15.395 16:07:45 -- target/tls.sh@36 -- # killprocess 82438 00:15:15.395 16:07:45 -- common/autotest_common.sh@936 -- # '[' -z 82438 ']' 00:15:15.395 16:07:45 -- common/autotest_common.sh@940 -- # kill -0 82438 00:15:15.395 16:07:45 -- common/autotest_common.sh@941 -- # uname 00:15:15.395 16:07:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:15.395 16:07:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82438 00:15:15.395 killing process with pid 82438 00:15:15.395 Received shutdown signal, test time was about 10.000000 seconds 00:15:15.395 00:15:15.395 Latency(us) 00:15:15.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.395 =================================================================================================================== 00:15:15.395 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:15.395 16:07:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:15.395 16:07:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:15.395 16:07:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82438' 00:15:15.395 16:07:45 -- common/autotest_common.sh@955 -- # kill 82438 00:15:15.395 [2024-04-15 16:07:45.168420] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:15.395 16:07:45 -- common/autotest_common.sh@960 -- # wait 82438 00:15:15.395 16:07:45 -- target/tls.sh@37 -- # return 1 00:15:15.395 16:07:45 -- common/autotest_common.sh@641 -- # es=1 00:15:15.395 16:07:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:15.395 16:07:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:15.395 16:07:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:15.395 16:07:45 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:15.395 16:07:45 -- common/autotest_common.sh@638 -- # local es=0 00:15:15.395 16:07:45 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:15.395 16:07:45 
-- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:15:15.395 16:07:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:15.395 16:07:45 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:15:15.395 16:07:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:15.395 16:07:45 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:15.395 16:07:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:15.395 16:07:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:15.395 16:07:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:15.395 16:07:45 -- target/tls.sh@23 -- # psk= 00:15:15.395 16:07:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:15.395 16:07:45 -- target/tls.sh@28 -- # bdevperf_pid=82466 00:15:15.395 16:07:45 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:15.395 16:07:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:15.395 16:07:45 -- target/tls.sh@31 -- # waitforlisten 82466 /var/tmp/bdevperf.sock 00:15:15.395 16:07:45 -- common/autotest_common.sh@817 -- # '[' -z 82466 ']' 00:15:15.395 16:07:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.395 16:07:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:15.395 16:07:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:15.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:15.395 16:07:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:15.395 16:07:45 -- common/autotest_common.sh@10 -- # set +x 00:15:15.654 [2024-04-15 16:07:45.398111] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:15:15.654 [2024-04-15 16:07:45.398208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82466 ] 00:15:15.654 [2024-04-15 16:07:45.529175] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.654 [2024-04-15 16:07:45.579105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.589 16:07:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:16.589 16:07:46 -- common/autotest_common.sh@850 -- # return 0 00:15:16.589 16:07:46 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:16.589 [2024-04-15 16:07:46.529053] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:16.589 [2024-04-15 16:07:46.530486] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc81020 (9): Bad file descriptor 00:15:16.589 [2024-04-15 16:07:46.531482] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:16.589 [2024-04-15 16:07:46.531504] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:16.589 [2024-04-15 16:07:46.531517] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:16.589 request: 00:15:16.589 { 00:15:16.589 "name": "TLSTEST", 00:15:16.589 "trtype": "tcp", 00:15:16.589 "traddr": "10.0.0.2", 00:15:16.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:16.589 "adrfam": "ipv4", 00:15:16.589 "trsvcid": "4420", 00:15:16.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.589 "method": "bdev_nvme_attach_controller", 00:15:16.589 "req_id": 1 00:15:16.589 } 00:15:16.589 Got JSON-RPC error response 00:15:16.589 response: 00:15:16.589 { 00:15:16.589 "code": -32602, 00:15:16.589 "message": "Invalid parameters" 00:15:16.589 } 00:15:16.847 16:07:46 -- target/tls.sh@36 -- # killprocess 82466 00:15:16.847 16:07:46 -- common/autotest_common.sh@936 -- # '[' -z 82466 ']' 00:15:16.847 16:07:46 -- common/autotest_common.sh@940 -- # kill -0 82466 00:15:16.847 16:07:46 -- common/autotest_common.sh@941 -- # uname 00:15:16.848 16:07:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:16.848 16:07:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82466 00:15:16.848 killing process with pid 82466 00:15:16.848 Received shutdown signal, test time was about 10.000000 seconds 00:15:16.848 00:15:16.848 Latency(us) 00:15:16.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.848 =================================================================================================================== 00:15:16.848 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:16.848 16:07:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:16.848 16:07:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:16.848 16:07:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82466' 00:15:16.848 16:07:46 -- common/autotest_common.sh@955 -- # kill 82466 00:15:16.848 16:07:46 -- common/autotest_common.sh@960 -- # wait 82466 00:15:16.848 
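Editorial note: of the negative attach attempts traced above, the host2 and cnode2 cases fail because the target cannot look up a PSK for that host/subsystem pair; tcp.c and posix.c both log "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>". A minimal sketch of how that identity string is put together, inferred only from those error messages (the "NVMe0R01" prefix is copied verbatim from the log, not derived from the spec or from SPDK source):

    def psk_identity(hostnqn: str, subnqn: str) -> str:
        """Build the TLS PSK identity as it appears in the errors above."""
        # "NVMe0R01" is taken verbatim from the log messages; its internal
        # version/hash fields are not decoded here.
        return f"NVMe0R01 {hostnqn} {subnqn}"

    # The second negative case pairs an unregistered host with cnode1:
    print(psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1

Only the pairs registered via nvmf_subsystem_add_host resolve to a key, which is why swapping in host2 or cnode2 produces the lookup failure rather than a handshake failure.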
16:07:46 -- target/tls.sh@37 -- # return 1 00:15:16.848 16:07:46 -- common/autotest_common.sh@641 -- # es=1 00:15:16.848 16:07:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:16.848 16:07:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:16.848 16:07:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:16.848 16:07:46 -- target/tls.sh@158 -- # killprocess 82033 00:15:16.848 16:07:46 -- common/autotest_common.sh@936 -- # '[' -z 82033 ']' 00:15:16.848 16:07:46 -- common/autotest_common.sh@940 -- # kill -0 82033 00:15:16.848 16:07:46 -- common/autotest_common.sh@941 -- # uname 00:15:16.848 16:07:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:16.848 16:07:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82033 00:15:16.848 16:07:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:16.848 16:07:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:16.848 16:07:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82033' 00:15:16.848 killing process with pid 82033 00:15:16.848 16:07:46 -- common/autotest_common.sh@955 -- # kill 82033 00:15:16.848 [2024-04-15 16:07:46.794897] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:16.848 16:07:46 -- common/autotest_common.sh@960 -- # wait 82033 00:15:17.106 16:07:46 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:17.106 16:07:46 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:17.106 16:07:46 -- nvmf/common.sh@691 -- # local prefix key digest 00:15:17.106 16:07:46 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:15:17.106 16:07:46 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:17.106 16:07:46 -- nvmf/common.sh@693 -- # digest=2 00:15:17.106 16:07:46 -- nvmf/common.sh@694 -- # python - 00:15:17.106 16:07:47 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:17.106 16:07:47 -- target/tls.sh@160 -- # mktemp 00:15:17.106 16:07:47 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Y8qtaTnaW9 00:15:17.106 16:07:47 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:17.106 16:07:47 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Y8qtaTnaW9 00:15:17.106 16:07:47 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:15:17.106 16:07:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:17.107 16:07:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:17.107 16:07:47 -- common/autotest_common.sh@10 -- # set +x 00:15:17.107 16:07:47 -- nvmf/common.sh@470 -- # nvmfpid=82503 00:15:17.107 16:07:47 -- nvmf/common.sh@471 -- # waitforlisten 82503 00:15:17.107 16:07:47 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:17.107 16:07:47 -- common/autotest_common.sh@817 -- # '[' -z 82503 ']' 00:15:17.107 16:07:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.107 16:07:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:17.107 16:07:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:17.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.107 16:07:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:17.107 16:07:47 -- common/autotest_common.sh@10 -- # set +x 00:15:17.364 [2024-04-15 16:07:47.098915] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:17.364 [2024-04-15 16:07:47.099001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.364 [2024-04-15 16:07:47.232601] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.364 [2024-04-15 16:07:47.278065] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.364 [2024-04-15 16:07:47.278117] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.364 [2024-04-15 16:07:47.278127] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.364 [2024-04-15 16:07:47.278136] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.364 [2024-04-15 16:07:47.278143] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.364 [2024-04-15 16:07:47.278176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.299 16:07:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:18.299 16:07:48 -- common/autotest_common.sh@850 -- # return 0 00:15:18.299 16:07:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:18.299 16:07:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:18.299 16:07:48 -- common/autotest_common.sh@10 -- # set +x 00:15:18.299 16:07:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.299 16:07:48 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Y8qtaTnaW9 00:15:18.299 16:07:48 -- target/tls.sh@49 -- # local key=/tmp/tmp.Y8qtaTnaW9 00:15:18.299 16:07:48 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:18.558 [2024-04-15 16:07:48.374020] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.558 16:07:48 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:18.816 16:07:48 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:19.074 [2024-04-15 16:07:48.850096] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:19.074 [2024-04-15 16:07:48.850492] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.074 16:07:48 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:19.333 malloc0 00:15:19.333 16:07:49 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:19.620 16:07:49 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y8qtaTnaW9 00:15:19.885 [2024-04-15 16:07:49.663143] 
tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:19.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:19.885 16:07:49 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y8qtaTnaW9 00:15:19.885 16:07:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:19.885 16:07:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:19.885 16:07:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:19.885 16:07:49 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Y8qtaTnaW9' 00:15:19.885 16:07:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:19.885 16:07:49 -- target/tls.sh@28 -- # bdevperf_pid=82558 00:15:19.885 16:07:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:19.885 16:07:49 -- target/tls.sh@31 -- # waitforlisten 82558 /var/tmp/bdevperf.sock 00:15:19.885 16:07:49 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:19.885 16:07:49 -- common/autotest_common.sh@817 -- # '[' -z 82558 ']' 00:15:19.885 16:07:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:19.885 16:07:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:19.885 16:07:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:19.885 16:07:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:19.885 16:07:49 -- common/autotest_common.sh@10 -- # set +x 00:15:19.885 [2024-04-15 16:07:49.729643] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:19.885 [2024-04-15 16:07:49.729932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82558 ] 00:15:20.143 [2024-04-15 16:07:49.873394] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.143 [2024-04-15 16:07:49.923947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.710 16:07:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:20.710 16:07:50 -- common/autotest_common.sh@850 -- # return 0 00:15:20.710 16:07:50 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y8qtaTnaW9 00:15:20.968 [2024-04-15 16:07:50.887684] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:20.968 [2024-04-15 16:07:50.888015] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:21.226 TLSTESTn1 00:15:21.226 16:07:50 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:21.226 Running I/O for 10 seconds... 
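Editorial note: the /tmp/tmp.Y8qtaTnaW9 key used from this point on was produced earlier by format_interchange_psk, which wraps the raw configured key in the TLS PSK interchange format (the "NVMeTLSkey-1:02:...==:" key_long value logged above). A minimal Python sketch of that encoding, assuming the CRC-32 of the key bytes is appended in little-endian order before base64 and that the digest argument maps directly to the ":02:" field; these are assumptions drawn from the logged output, not a copy of SPDK's helper:

    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int) -> str:
        """Sketch of the TLS PSK interchange wrapping seen in the log.

        key    -- the configured PSK exactly as passed to the helper (here the
                  ASCII string 00112233445566778899aabbccddeeff0011223344556677)
        digest -- hash indicator (2 corresponds to the ':02:' field above)
        """
        data = key.encode()
        # Assumption: CRC-32 over the key bytes, appended little-endian.
        crc = zlib.crc32(data).to_bytes(4, "little")
        b64 = base64.b64encode(data + crc).decode()
        return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))
    # Should reproduce the key_long value logged above
    # (NVMeTLSkey-1:02:MDAxMTIy...wWXNJw==:) if the byte-order assumption holds.

The wrapped key is then written to the temp file, chmod 0600, and handed to both nvmf_subsystem_add_host and bdev_nvme_attach_controller via --psk, which is the path the TLSTESTn1 run below exercises.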
00:15:31.236 00:15:31.236 Latency(us) 00:15:31.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.236 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:31.236 Verification LBA range: start 0x0 length 0x2000 00:15:31.236 TLSTESTn1 : 10.01 5618.95 21.95 0.00 0.00 22744.01 4088.20 17850.76 00:15:31.236 =================================================================================================================== 00:15:31.236 Total : 5618.95 21.95 0.00 0.00 22744.01 4088.20 17850.76 00:15:31.236 0 00:15:31.236 16:08:01 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:31.236 16:08:01 -- target/tls.sh@45 -- # killprocess 82558 00:15:31.236 16:08:01 -- common/autotest_common.sh@936 -- # '[' -z 82558 ']' 00:15:31.236 16:08:01 -- common/autotest_common.sh@940 -- # kill -0 82558 00:15:31.236 16:08:01 -- common/autotest_common.sh@941 -- # uname 00:15:31.236 16:08:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:31.236 16:08:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82558 00:15:31.236 killing process with pid 82558 00:15:31.236 Received shutdown signal, test time was about 10.000000 seconds 00:15:31.236 00:15:31.236 Latency(us) 00:15:31.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.236 =================================================================================================================== 00:15:31.236 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:31.236 16:08:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:31.236 16:08:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:31.236 16:08:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82558' 00:15:31.236 16:08:01 -- common/autotest_common.sh@955 -- # kill 82558 00:15:31.236 [2024-04-15 16:08:01.165708] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:31.236 16:08:01 -- common/autotest_common.sh@960 -- # wait 82558 00:15:31.494 16:08:01 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Y8qtaTnaW9 00:15:31.494 16:08:01 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y8qtaTnaW9 00:15:31.494 16:08:01 -- common/autotest_common.sh@638 -- # local es=0 00:15:31.494 16:08:01 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y8qtaTnaW9 00:15:31.494 16:08:01 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:15:31.494 16:08:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:31.494 16:08:01 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:15:31.494 16:08:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:31.494 16:08:01 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y8qtaTnaW9 00:15:31.494 16:08:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:31.494 16:08:01 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:31.494 16:08:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:31.494 16:08:01 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Y8qtaTnaW9' 00:15:31.494 16:08:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:31.494 16:08:01 -- target/tls.sh@28 -- # bdevperf_pid=82692 00:15:31.494 
16:08:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:31.494 16:08:01 -- target/tls.sh@31 -- # waitforlisten 82692 /var/tmp/bdevperf.sock 00:15:31.494 16:08:01 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:31.494 16:08:01 -- common/autotest_common.sh@817 -- # '[' -z 82692 ']' 00:15:31.494 16:08:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:31.494 16:08:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:31.494 16:08:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:31.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:31.494 16:08:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:31.494 16:08:01 -- common/autotest_common.sh@10 -- # set +x 00:15:31.494 [2024-04-15 16:08:01.399132] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:31.494 [2024-04-15 16:08:01.399366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82692 ] 00:15:31.753 [2024-04-15 16:08:01.533933] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.753 [2024-04-15 16:08:01.581548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.753 16:08:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:31.753 16:08:01 -- common/autotest_common.sh@850 -- # return 0 00:15:31.753 16:08:01 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y8qtaTnaW9 00:15:32.011 [2024-04-15 16:08:01.900119] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:32.011 [2024-04-15 16:08:01.900398] bdev_nvme.c:6046:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:32.011 [2024-04-15 16:08:01.900485] bdev_nvme.c:6155:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Y8qtaTnaW9 00:15:32.011 request: 00:15:32.011 { 00:15:32.011 "name": "TLSTEST", 00:15:32.011 "trtype": "tcp", 00:15:32.011 "traddr": "10.0.0.2", 00:15:32.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:32.011 "adrfam": "ipv4", 00:15:32.011 "trsvcid": "4420", 00:15:32.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.011 "psk": "/tmp/tmp.Y8qtaTnaW9", 00:15:32.011 "method": "bdev_nvme_attach_controller", 00:15:32.011 "req_id": 1 00:15:32.011 } 00:15:32.011 Got JSON-RPC error response 00:15:32.011 response: 00:15:32.011 { 00:15:32.011 "code": -1, 00:15:32.011 "message": "Operation not permitted" 00:15:32.011 } 00:15:32.011 16:08:01 -- target/tls.sh@36 -- # killprocess 82692 00:15:32.011 16:08:01 -- common/autotest_common.sh@936 -- # '[' -z 82692 ']' 00:15:32.011 16:08:01 -- common/autotest_common.sh@940 -- # kill -0 82692 00:15:32.011 16:08:01 -- common/autotest_common.sh@941 -- # uname 00:15:32.011 16:08:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:32.011 16:08:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82692 00:15:32.011 16:08:01 -- common/autotest_common.sh@942 -- # 
process_name=reactor_2 00:15:32.011 16:08:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:32.011 16:08:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82692' 00:15:32.011 killing process with pid 82692 00:15:32.011 16:08:01 -- common/autotest_common.sh@955 -- # kill 82692 00:15:32.011 Received shutdown signal, test time was about 10.000000 seconds 00:15:32.011 00:15:32.011 Latency(us) 00:15:32.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.011 =================================================================================================================== 00:15:32.011 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:32.011 16:08:01 -- common/autotest_common.sh@960 -- # wait 82692 00:15:32.270 16:08:02 -- target/tls.sh@37 -- # return 1 00:15:32.270 16:08:02 -- common/autotest_common.sh@641 -- # es=1 00:15:32.270 16:08:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:32.270 16:08:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:32.270 16:08:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:32.270 16:08:02 -- target/tls.sh@174 -- # killprocess 82503 00:15:32.270 16:08:02 -- common/autotest_common.sh@936 -- # '[' -z 82503 ']' 00:15:32.270 16:08:02 -- common/autotest_common.sh@940 -- # kill -0 82503 00:15:32.270 16:08:02 -- common/autotest_common.sh@941 -- # uname 00:15:32.270 16:08:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:32.270 16:08:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82503 00:15:32.270 16:08:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:32.270 16:08:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:32.270 16:08:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82503' 00:15:32.270 killing process with pid 82503 00:15:32.270 16:08:02 -- common/autotest_common.sh@955 -- # kill 82503 00:15:32.270 [2024-04-15 16:08:02.159551] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:32.270 16:08:02 -- common/autotest_common.sh@960 -- # wait 82503 00:15:32.530 16:08:02 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:32.530 16:08:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:32.530 16:08:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:32.530 16:08:02 -- common/autotest_common.sh@10 -- # set +x 00:15:32.530 16:08:02 -- nvmf/common.sh@470 -- # nvmfpid=82716 00:15:32.530 16:08:02 -- nvmf/common.sh@471 -- # waitforlisten 82716 00:15:32.530 16:08:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:32.530 16:08:02 -- common/autotest_common.sh@817 -- # '[' -z 82716 ']' 00:15:32.530 16:08:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.530 16:08:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:32.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.530 16:08:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.530 16:08:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:32.530 16:08:02 -- common/autotest_common.sh@10 -- # set +x 00:15:32.530 [2024-04-15 16:08:02.416804] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:15:32.530 [2024-04-15 16:08:02.417113] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.788 [2024-04-15 16:08:02.564331] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.788 [2024-04-15 16:08:02.611446] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.788 [2024-04-15 16:08:02.611696] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.788 [2024-04-15 16:08:02.611870] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.788 [2024-04-15 16:08:02.612035] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.788 [2024-04-15 16:08:02.612103] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.788 [2024-04-15 16:08:02.612209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.788 16:08:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:32.788 16:08:02 -- common/autotest_common.sh@850 -- # return 0 00:15:32.788 16:08:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:32.788 16:08:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:32.788 16:08:02 -- common/autotest_common.sh@10 -- # set +x 00:15:32.788 16:08:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.788 16:08:02 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Y8qtaTnaW9 00:15:32.788 16:08:02 -- common/autotest_common.sh@638 -- # local es=0 00:15:32.788 16:08:02 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Y8qtaTnaW9 00:15:32.788 16:08:02 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:15:32.788 16:08:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:32.788 16:08:02 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:15:32.788 16:08:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:32.788 16:08:02 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.Y8qtaTnaW9 00:15:32.788 16:08:02 -- target/tls.sh@49 -- # local key=/tmp/tmp.Y8qtaTnaW9 00:15:32.788 16:08:02 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:33.354 [2024-04-15 16:08:03.047473] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.354 16:08:03 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:33.354 16:08:03 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:33.612 [2024-04-15 16:08:03.571559] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:33.612 [2024-04-15 16:08:03.571961] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.870 16:08:03 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:33.870 malloc0 00:15:34.128 16:08:03 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 
1 00:15:34.128 16:08:04 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y8qtaTnaW9 00:15:34.384 [2024-04-15 16:08:04.256659] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:34.384 [2024-04-15 16:08:04.256920] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:34.384 [2024-04-15 16:08:04.257085] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:15:34.384 request: 00:15:34.384 { 00:15:34.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.384 "host": "nqn.2016-06.io.spdk:host1", 00:15:34.384 "psk": "/tmp/tmp.Y8qtaTnaW9", 00:15:34.384 "method": "nvmf_subsystem_add_host", 00:15:34.384 "req_id": 1 00:15:34.384 } 00:15:34.384 Got JSON-RPC error response 00:15:34.384 response: 00:15:34.384 { 00:15:34.384 "code": -32603, 00:15:34.384 "message": "Internal error" 00:15:34.384 } 00:15:34.384 16:08:04 -- common/autotest_common.sh@641 -- # es=1 00:15:34.384 16:08:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:34.384 16:08:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:34.384 16:08:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:34.384 16:08:04 -- target/tls.sh@180 -- # killprocess 82716 00:15:34.384 16:08:04 -- common/autotest_common.sh@936 -- # '[' -z 82716 ']' 00:15:34.384 16:08:04 -- common/autotest_common.sh@940 -- # kill -0 82716 00:15:34.384 16:08:04 -- common/autotest_common.sh@941 -- # uname 00:15:34.384 16:08:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:34.384 16:08:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82716 00:15:34.384 16:08:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:34.384 16:08:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:34.384 16:08:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82716' 00:15:34.384 killing process with pid 82716 00:15:34.384 16:08:04 -- common/autotest_common.sh@955 -- # kill 82716 00:15:34.384 16:08:04 -- common/autotest_common.sh@960 -- # wait 82716 00:15:34.641 16:08:04 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Y8qtaTnaW9 00:15:34.641 16:08:04 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:34.641 16:08:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:34.641 16:08:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:34.641 16:08:04 -- common/autotest_common.sh@10 -- # set +x 00:15:34.641 16:08:04 -- nvmf/common.sh@470 -- # nvmfpid=82771 00:15:34.641 16:08:04 -- nvmf/common.sh@471 -- # waitforlisten 82771 00:15:34.641 16:08:04 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:34.641 16:08:04 -- common/autotest_common.sh@817 -- # '[' -z 82771 ']' 00:15:34.641 16:08:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.641 16:08:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:34.641 16:08:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:34.641 16:08:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:34.641 16:08:04 -- common/autotest_common.sh@10 -- # set +x 00:15:34.641 [2024-04-15 16:08:04.571677] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:34.641 [2024-04-15 16:08:04.571963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.897 [2024-04-15 16:08:04.718663] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.897 [2024-04-15 16:08:04.771035] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.897 [2024-04-15 16:08:04.771285] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.897 [2024-04-15 16:08:04.771494] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.897 [2024-04-15 16:08:04.771668] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.897 [2024-04-15 16:08:04.771720] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.897 [2024-04-15 16:08:04.771843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.830 16:08:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:35.830 16:08:05 -- common/autotest_common.sh@850 -- # return 0 00:15:35.830 16:08:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:35.830 16:08:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:35.830 16:08:05 -- common/autotest_common.sh@10 -- # set +x 00:15:35.830 16:08:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.830 16:08:05 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Y8qtaTnaW9 00:15:35.830 16:08:05 -- target/tls.sh@49 -- # local key=/tmp/tmp.Y8qtaTnaW9 00:15:35.830 16:08:05 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:36.089 [2024-04-15 16:08:05.859318] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.089 16:08:05 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:36.347 16:08:06 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:36.605 [2024-04-15 16:08:06.327395] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:36.605 [2024-04-15 16:08:06.327848] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.605 16:08:06 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:36.605 malloc0 00:15:36.605 16:08:06 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:36.863 16:08:06 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y8qtaTnaW9 00:15:37.121 [2024-04-15 16:08:07.060385] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 
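Editorial note: the chmod 0666 / chmod 0600 pair in the traces above is the point of this test case. With the key file world-readable, both bdev_nvme_load_psk and tcp_load_psk reject it ("Incorrect permissions for PSK file", JSON-RPC errors -1 and -32603), and only after chmod 0600 does nvmf_subsystem_add_host succeed again. A small illustrative pre-flight check a caller could run; the exact mode bits SPDK tolerates are not spelled out in this log, so treating any group/other access as too loose is an assumption drawn from the passing 0600 case:

    import os
    import stat

    def psk_file_mode_ok(path: str) -> bool:
        """Return True if the PSK file is accessible only to its owner.

        Assumption: 0600 (owner read/write, no group/other bits), as used by
        the passing chmod in the log above; anything looser is rejected.
        """
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

    # e.g. psk_file_mode_ok("/tmp/tmp.Y8qtaTnaW9") would be False right after
    # 'chmod 0666' and True again after 'chmod 0600'.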
00:15:37.121 16:08:07 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:37.121 16:08:07 -- target/tls.sh@188 -- # bdevperf_pid=82827 00:15:37.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:37.121 16:08:07 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:37.121 16:08:07 -- target/tls.sh@191 -- # waitforlisten 82827 /var/tmp/bdevperf.sock 00:15:37.121 16:08:07 -- common/autotest_common.sh@817 -- # '[' -z 82827 ']' 00:15:37.121 16:08:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:37.121 16:08:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:37.121 16:08:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:37.121 16:08:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:37.121 16:08:07 -- common/autotest_common.sh@10 -- # set +x 00:15:37.404 [2024-04-15 16:08:07.131074] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:37.404 [2024-04-15 16:08:07.131647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82827 ] 00:15:37.404 [2024-04-15 16:08:07.279359] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.404 [2024-04-15 16:08:07.324160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.338 16:08:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:38.338 16:08:08 -- common/autotest_common.sh@850 -- # return 0 00:15:38.338 16:08:08 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y8qtaTnaW9 00:15:38.338 [2024-04-15 16:08:08.240290] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:38.338 [2024-04-15 16:08:08.240633] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:38.596 TLSTESTn1 00:15:38.596 16:08:08 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:38.855 16:08:08 -- target/tls.sh@196 -- # tgtconf='{ 00:15:38.855 "subsystems": [ 00:15:38.855 { 00:15:38.855 "subsystem": "keyring", 00:15:38.855 "config": [] 00:15:38.855 }, 00:15:38.855 { 00:15:38.855 "subsystem": "iobuf", 00:15:38.855 "config": [ 00:15:38.855 { 00:15:38.855 "method": "iobuf_set_options", 00:15:38.855 "params": { 00:15:38.855 "small_pool_count": 8192, 00:15:38.855 "large_pool_count": 1024, 00:15:38.855 "small_bufsize": 8192, 00:15:38.855 "large_bufsize": 135168 00:15:38.855 } 00:15:38.855 } 00:15:38.855 ] 00:15:38.855 }, 00:15:38.855 { 00:15:38.855 "subsystem": "sock", 00:15:38.855 "config": [ 00:15:38.855 { 00:15:38.855 "method": "sock_impl_set_options", 00:15:38.855 "params": { 00:15:38.855 "impl_name": "uring", 00:15:38.855 "recv_buf_size": 2097152, 00:15:38.855 "send_buf_size": 2097152, 00:15:38.855 "enable_recv_pipe": true, 00:15:38.855 "enable_quickack": false, 00:15:38.855 "enable_placement_id": 0, 00:15:38.855 
"enable_zerocopy_send_server": false, 00:15:38.855 "enable_zerocopy_send_client": false, 00:15:38.855 "zerocopy_threshold": 0, 00:15:38.855 "tls_version": 0, 00:15:38.855 "enable_ktls": false 00:15:38.855 } 00:15:38.855 }, 00:15:38.855 { 00:15:38.855 "method": "sock_impl_set_options", 00:15:38.855 "params": { 00:15:38.855 "impl_name": "posix", 00:15:38.855 "recv_buf_size": 2097152, 00:15:38.855 "send_buf_size": 2097152, 00:15:38.855 "enable_recv_pipe": true, 00:15:38.855 "enable_quickack": false, 00:15:38.855 "enable_placement_id": 0, 00:15:38.855 "enable_zerocopy_send_server": true, 00:15:38.855 "enable_zerocopy_send_client": false, 00:15:38.855 "zerocopy_threshold": 0, 00:15:38.855 "tls_version": 0, 00:15:38.855 "enable_ktls": false 00:15:38.855 } 00:15:38.855 }, 00:15:38.855 { 00:15:38.855 "method": "sock_impl_set_options", 00:15:38.855 "params": { 00:15:38.855 "impl_name": "ssl", 00:15:38.855 "recv_buf_size": 4096, 00:15:38.855 "send_buf_size": 4096, 00:15:38.855 "enable_recv_pipe": true, 00:15:38.855 "enable_quickack": false, 00:15:38.855 "enable_placement_id": 0, 00:15:38.855 "enable_zerocopy_send_server": true, 00:15:38.855 "enable_zerocopy_send_client": false, 00:15:38.855 "zerocopy_threshold": 0, 00:15:38.855 "tls_version": 0, 00:15:38.855 "enable_ktls": false 00:15:38.855 } 00:15:38.855 } 00:15:38.855 ] 00:15:38.855 }, 00:15:38.855 { 00:15:38.855 "subsystem": "vmd", 00:15:38.855 "config": [] 00:15:38.855 }, 00:15:38.855 { 00:15:38.855 "subsystem": "accel", 00:15:38.855 "config": [ 00:15:38.855 { 00:15:38.855 "method": "accel_set_options", 00:15:38.855 "params": { 00:15:38.855 "small_cache_size": 128, 00:15:38.855 "large_cache_size": 16, 00:15:38.855 "task_count": 2048, 00:15:38.855 "sequence_count": 2048, 00:15:38.855 "buf_count": 2048 00:15:38.855 } 00:15:38.855 } 00:15:38.855 ] 00:15:38.855 }, 00:15:38.855 { 00:15:38.855 "subsystem": "bdev", 00:15:38.855 "config": [ 00:15:38.855 { 00:15:38.855 "method": "bdev_set_options", 00:15:38.855 "params": { 00:15:38.855 "bdev_io_pool_size": 65535, 00:15:38.855 "bdev_io_cache_size": 256, 00:15:38.856 "bdev_auto_examine": true, 00:15:38.856 "iobuf_small_cache_size": 128, 00:15:38.856 "iobuf_large_cache_size": 16 00:15:38.856 } 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "method": "bdev_raid_set_options", 00:15:38.856 "params": { 00:15:38.856 "process_window_size_kb": 1024 00:15:38.856 } 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "method": "bdev_iscsi_set_options", 00:15:38.856 "params": { 00:15:38.856 "timeout_sec": 30 00:15:38.856 } 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "method": "bdev_nvme_set_options", 00:15:38.856 "params": { 00:15:38.856 "action_on_timeout": "none", 00:15:38.856 "timeout_us": 0, 00:15:38.856 "timeout_admin_us": 0, 00:15:38.856 "keep_alive_timeout_ms": 10000, 00:15:38.856 "arbitration_burst": 0, 00:15:38.856 "low_priority_weight": 0, 00:15:38.856 "medium_priority_weight": 0, 00:15:38.856 "high_priority_weight": 0, 00:15:38.856 "nvme_adminq_poll_period_us": 10000, 00:15:38.856 "nvme_ioq_poll_period_us": 0, 00:15:38.856 "io_queue_requests": 0, 00:15:38.856 "delay_cmd_submit": true, 00:15:38.856 "transport_retry_count": 4, 00:15:38.856 "bdev_retry_count": 3, 00:15:38.856 "transport_ack_timeout": 0, 00:15:38.856 "ctrlr_loss_timeout_sec": 0, 00:15:38.856 "reconnect_delay_sec": 0, 00:15:38.856 "fast_io_fail_timeout_sec": 0, 00:15:38.856 "disable_auto_failback": false, 00:15:38.856 "generate_uuids": false, 00:15:38.856 "transport_tos": 0, 00:15:38.856 "nvme_error_stat": false, 00:15:38.856 "rdma_srq_size": 0, 
00:15:38.856 "io_path_stat": false, 00:15:38.856 "allow_accel_sequence": false, 00:15:38.856 "rdma_max_cq_size": 0, 00:15:38.856 "rdma_cm_event_timeout_ms": 0, 00:15:38.856 "dhchap_digests": [ 00:15:38.856 "sha256", 00:15:38.856 "sha384", 00:15:38.856 "sha512" 00:15:38.856 ], 00:15:38.856 "dhchap_dhgroups": [ 00:15:38.856 "null", 00:15:38.856 "ffdhe2048", 00:15:38.856 "ffdhe3072", 00:15:38.856 "ffdhe4096", 00:15:38.856 "ffdhe6144", 00:15:38.856 "ffdhe8192" 00:15:38.856 ] 00:15:38.856 } 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "method": "bdev_nvme_set_hotplug", 00:15:38.856 "params": { 00:15:38.856 "period_us": 100000, 00:15:38.856 "enable": false 00:15:38.856 } 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "method": "bdev_malloc_create", 00:15:38.856 "params": { 00:15:38.856 "name": "malloc0", 00:15:38.856 "num_blocks": 8192, 00:15:38.856 "block_size": 4096, 00:15:38.856 "physical_block_size": 4096, 00:15:38.856 "uuid": "6f3bd47a-99d9-4261-8061-63d30d85732b", 00:15:38.856 "optimal_io_boundary": 0 00:15:38.856 } 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "method": "bdev_wait_for_examine" 00:15:38.856 } 00:15:38.856 ] 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "subsystem": "nbd", 00:15:38.856 "config": [] 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "subsystem": "scheduler", 00:15:38.856 "config": [ 00:15:38.856 { 00:15:38.856 "method": "framework_set_scheduler", 00:15:38.856 "params": { 00:15:38.856 "name": "static" 00:15:38.856 } 00:15:38.856 } 00:15:38.856 ] 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "subsystem": "nvmf", 00:15:38.856 "config": [ 00:15:38.856 { 00:15:38.856 "method": "nvmf_set_config", 00:15:38.856 "params": { 00:15:38.856 "discovery_filter": "match_any", 00:15:38.856 "admin_cmd_passthru": { 00:15:38.856 "identify_ctrlr": false 00:15:38.856 } 00:15:38.856 } 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "method": "nvmf_set_max_subsystems", 00:15:38.856 "params": { 00:15:38.856 "max_subsystems": 1024 00:15:38.856 } 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "method": "nvmf_set_crdt", 00:15:38.856 "params": { 00:15:38.856 "crdt1": 0, 00:15:38.856 "crdt2": 0, 00:15:38.856 "crdt3": 0 00:15:38.856 } 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "method": "nvmf_create_transport", 00:15:38.856 "params": { 00:15:38.856 "trtype": "TCP", 00:15:38.856 "max_queue_depth": 128, 00:15:38.856 "max_io_qpairs_per_ctrlr": 127, 00:15:38.856 "in_capsule_data_size": 4096, 00:15:38.856 "max_io_size": 131072, 00:15:38.856 "io_unit_size": 131072, 00:15:38.856 "max_aq_depth": 128, 00:15:38.856 "num_shared_buffers": 511, 00:15:38.856 "buf_cache_size": 4294967295, 00:15:38.856 "dif_insert_or_strip": false, 00:15:38.856 "zcopy": false, 00:15:38.856 "c2h_success": false, 00:15:38.856 "sock_priority": 0, 00:15:38.856 "abort_timeout_sec": 1, 00:15:38.856 "ack_timeout": 0 00:15:38.856 } 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "method": "nvmf_create_subsystem", 00:15:38.856 "params": { 00:15:38.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.856 "allow_any_host": false, 00:15:38.856 "serial_number": "SPDK00000000000001", 00:15:38.856 "model_number": "SPDK bdev Controller", 00:15:38.856 "max_namespaces": 10, 00:15:38.856 "min_cntlid": 1, 00:15:38.856 "max_cntlid": 65519, 00:15:38.856 "ana_reporting": false 00:15:38.856 } 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "method": "nvmf_subsystem_add_host", 00:15:38.856 "params": { 00:15:38.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.856 "host": "nqn.2016-06.io.spdk:host1", 00:15:38.856 "psk": "/tmp/tmp.Y8qtaTnaW9" 00:15:38.856 } 00:15:38.856 }, 
00:15:38.856 { 00:15:38.856 "method": "nvmf_subsystem_add_ns", 00:15:38.856 "params": { 00:15:38.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.856 "namespace": { 00:15:38.856 "nsid": 1, 00:15:38.856 "bdev_name": "malloc0", 00:15:38.856 "nguid": "6F3BD47A99D94261806163D30D85732B", 00:15:38.856 "uuid": "6f3bd47a-99d9-4261-8061-63d30d85732b", 00:15:38.856 "no_auto_visible": false 00:15:38.856 } 00:15:38.856 } 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "method": "nvmf_subsystem_add_listener", 00:15:38.856 "params": { 00:15:38.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.856 "listen_address": { 00:15:38.856 "trtype": "TCP", 00:15:38.856 "adrfam": "IPv4", 00:15:38.856 "traddr": "10.0.0.2", 00:15:38.856 "trsvcid": "4420" 00:15:38.856 }, 00:15:38.856 "secure_channel": true 00:15:38.856 } 00:15:38.856 } 00:15:38.856 ] 00:15:38.856 } 00:15:38.856 ] 00:15:38.856 }' 00:15:38.856 16:08:08 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:39.115 16:08:08 -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:39.115 "subsystems": [ 00:15:39.115 { 00:15:39.115 "subsystem": "keyring", 00:15:39.115 "config": [] 00:15:39.115 }, 00:15:39.115 { 00:15:39.115 "subsystem": "iobuf", 00:15:39.115 "config": [ 00:15:39.115 { 00:15:39.115 "method": "iobuf_set_options", 00:15:39.115 "params": { 00:15:39.115 "small_pool_count": 8192, 00:15:39.115 "large_pool_count": 1024, 00:15:39.115 "small_bufsize": 8192, 00:15:39.115 "large_bufsize": 135168 00:15:39.115 } 00:15:39.115 } 00:15:39.115 ] 00:15:39.115 }, 00:15:39.115 { 00:15:39.115 "subsystem": "sock", 00:15:39.115 "config": [ 00:15:39.115 { 00:15:39.115 "method": "sock_impl_set_options", 00:15:39.115 "params": { 00:15:39.115 "impl_name": "uring", 00:15:39.115 "recv_buf_size": 2097152, 00:15:39.115 "send_buf_size": 2097152, 00:15:39.115 "enable_recv_pipe": true, 00:15:39.115 "enable_quickack": false, 00:15:39.115 "enable_placement_id": 0, 00:15:39.115 "enable_zerocopy_send_server": false, 00:15:39.115 "enable_zerocopy_send_client": false, 00:15:39.115 "zerocopy_threshold": 0, 00:15:39.115 "tls_version": 0, 00:15:39.115 "enable_ktls": false 00:15:39.115 } 00:15:39.115 }, 00:15:39.115 { 00:15:39.115 "method": "sock_impl_set_options", 00:15:39.115 "params": { 00:15:39.115 "impl_name": "posix", 00:15:39.115 "recv_buf_size": 2097152, 00:15:39.115 "send_buf_size": 2097152, 00:15:39.115 "enable_recv_pipe": true, 00:15:39.115 "enable_quickack": false, 00:15:39.115 "enable_placement_id": 0, 00:15:39.115 "enable_zerocopy_send_server": true, 00:15:39.115 "enable_zerocopy_send_client": false, 00:15:39.115 "zerocopy_threshold": 0, 00:15:39.115 "tls_version": 0, 00:15:39.115 "enable_ktls": false 00:15:39.115 } 00:15:39.115 }, 00:15:39.115 { 00:15:39.115 "method": "sock_impl_set_options", 00:15:39.115 "params": { 00:15:39.115 "impl_name": "ssl", 00:15:39.115 "recv_buf_size": 4096, 00:15:39.115 "send_buf_size": 4096, 00:15:39.115 "enable_recv_pipe": true, 00:15:39.115 "enable_quickack": false, 00:15:39.115 "enable_placement_id": 0, 00:15:39.115 "enable_zerocopy_send_server": true, 00:15:39.115 "enable_zerocopy_send_client": false, 00:15:39.115 "zerocopy_threshold": 0, 00:15:39.115 "tls_version": 0, 00:15:39.115 "enable_ktls": false 00:15:39.115 } 00:15:39.116 } 00:15:39.116 ] 00:15:39.116 }, 00:15:39.116 { 00:15:39.116 "subsystem": "vmd", 00:15:39.116 "config": [] 00:15:39.116 }, 00:15:39.116 { 00:15:39.116 "subsystem": "accel", 00:15:39.116 "config": [ 00:15:39.116 { 00:15:39.116 "method": "accel_set_options", 
00:15:39.116 "params": { 00:15:39.116 "small_cache_size": 128, 00:15:39.116 "large_cache_size": 16, 00:15:39.116 "task_count": 2048, 00:15:39.116 "sequence_count": 2048, 00:15:39.116 "buf_count": 2048 00:15:39.116 } 00:15:39.116 } 00:15:39.116 ] 00:15:39.116 }, 00:15:39.116 { 00:15:39.116 "subsystem": "bdev", 00:15:39.116 "config": [ 00:15:39.116 { 00:15:39.116 "method": "bdev_set_options", 00:15:39.116 "params": { 00:15:39.116 "bdev_io_pool_size": 65535, 00:15:39.116 "bdev_io_cache_size": 256, 00:15:39.116 "bdev_auto_examine": true, 00:15:39.116 "iobuf_small_cache_size": 128, 00:15:39.116 "iobuf_large_cache_size": 16 00:15:39.116 } 00:15:39.116 }, 00:15:39.116 { 00:15:39.116 "method": "bdev_raid_set_options", 00:15:39.116 "params": { 00:15:39.116 "process_window_size_kb": 1024 00:15:39.116 } 00:15:39.116 }, 00:15:39.116 { 00:15:39.116 "method": "bdev_iscsi_set_options", 00:15:39.116 "params": { 00:15:39.116 "timeout_sec": 30 00:15:39.116 } 00:15:39.116 }, 00:15:39.116 { 00:15:39.116 "method": "bdev_nvme_set_options", 00:15:39.116 "params": { 00:15:39.116 "action_on_timeout": "none", 00:15:39.116 "timeout_us": 0, 00:15:39.116 "timeout_admin_us": 0, 00:15:39.116 "keep_alive_timeout_ms": 10000, 00:15:39.116 "arbitration_burst": 0, 00:15:39.116 "low_priority_weight": 0, 00:15:39.116 "medium_priority_weight": 0, 00:15:39.116 "high_priority_weight": 0, 00:15:39.116 "nvme_adminq_poll_period_us": 10000, 00:15:39.116 "nvme_ioq_poll_period_us": 0, 00:15:39.116 "io_queue_requests": 512, 00:15:39.116 "delay_cmd_submit": true, 00:15:39.116 "transport_retry_count": 4, 00:15:39.116 "bdev_retry_count": 3, 00:15:39.116 "transport_ack_timeout": 0, 00:15:39.116 "ctrlr_loss_timeout_sec": 0, 00:15:39.116 "reconnect_delay_sec": 0, 00:15:39.116 "fast_io_fail_timeout_sec": 0, 00:15:39.116 "disable_auto_failback": false, 00:15:39.116 "generate_uuids": false, 00:15:39.116 "transport_tos": 0, 00:15:39.116 "nvme_error_stat": false, 00:15:39.116 "rdma_srq_size": 0, 00:15:39.116 "io_path_stat": false, 00:15:39.116 "allow_accel_sequence": false, 00:15:39.116 "rdma_max_cq_size": 0, 00:15:39.116 "rdma_cm_event_timeout_ms": 0, 00:15:39.116 "dhchap_digests": [ 00:15:39.116 "sha256", 00:15:39.116 "sha384", 00:15:39.116 "sha512" 00:15:39.116 ], 00:15:39.116 "dhchap_dhgroups": [ 00:15:39.116 "null", 00:15:39.116 "ffdhe2048", 00:15:39.116 "ffdhe3072", 00:15:39.116 "ffdhe4096", 00:15:39.116 "ffdhe6144", 00:15:39.116 "ffdhe8192" 00:15:39.116 ] 00:15:39.116 } 00:15:39.116 }, 00:15:39.116 { 00:15:39.116 "method": "bdev_nvme_attach_controller", 00:15:39.116 "params": { 00:15:39.116 "name": "TLSTEST", 00:15:39.116 "trtype": "TCP", 00:15:39.116 "adrfam": "IPv4", 00:15:39.116 "traddr": "10.0.0.2", 00:15:39.116 "trsvcid": "4420", 00:15:39.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.116 "prchk_reftag": false, 00:15:39.116 "prchk_guard": false, 00:15:39.116 "ctrlr_loss_timeout_sec": 0, 00:15:39.116 "reconnect_delay_sec": 0, 00:15:39.116 "fast_io_fail_timeout_sec": 0, 00:15:39.116 "psk": "/tmp/tmp.Y8qtaTnaW9", 00:15:39.116 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.116 "hdgst": false, 00:15:39.116 "ddgst": false 00:15:39.116 } 00:15:39.116 }, 00:15:39.116 { 00:15:39.116 "method": "bdev_nvme_set_hotplug", 00:15:39.116 "params": { 00:15:39.116 "period_us": 100000, 00:15:39.116 "enable": false 00:15:39.116 } 00:15:39.116 }, 00:15:39.116 { 00:15:39.116 "method": "bdev_wait_for_examine" 00:15:39.116 } 00:15:39.116 ] 00:15:39.116 }, 00:15:39.116 { 00:15:39.116 "subsystem": "nbd", 00:15:39.116 "config": [] 00:15:39.116 } 
00:15:39.116 ] 00:15:39.116 }' 00:15:39.116 16:08:08 -- target/tls.sh@199 -- # killprocess 82827 00:15:39.116 16:08:08 -- common/autotest_common.sh@936 -- # '[' -z 82827 ']' 00:15:39.116 16:08:08 -- common/autotest_common.sh@940 -- # kill -0 82827 00:15:39.116 16:08:08 -- common/autotest_common.sh@941 -- # uname 00:15:39.116 16:08:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:39.116 16:08:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82827 00:15:39.116 killing process with pid 82827 00:15:39.116 Received shutdown signal, test time was about 10.000000 seconds 00:15:39.116 00:15:39.116 Latency(us) 00:15:39.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.116 =================================================================================================================== 00:15:39.116 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:39.116 16:08:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:39.116 16:08:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:39.116 16:08:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82827' 00:15:39.116 16:08:08 -- common/autotest_common.sh@955 -- # kill 82827 00:15:39.116 [2024-04-15 16:08:08.953544] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:39.116 16:08:08 -- common/autotest_common.sh@960 -- # wait 82827 00:15:39.374 16:08:09 -- target/tls.sh@200 -- # killprocess 82771 00:15:39.374 16:08:09 -- common/autotest_common.sh@936 -- # '[' -z 82771 ']' 00:15:39.374 16:08:09 -- common/autotest_common.sh@940 -- # kill -0 82771 00:15:39.374 16:08:09 -- common/autotest_common.sh@941 -- # uname 00:15:39.374 16:08:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:39.374 16:08:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82771 00:15:39.374 16:08:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:39.374 16:08:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:39.374 16:08:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82771' 00:15:39.374 killing process with pid 82771 00:15:39.374 16:08:09 -- common/autotest_common.sh@955 -- # kill 82771 00:15:39.374 [2024-04-15 16:08:09.164477] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:39.374 16:08:09 -- common/autotest_common.sh@960 -- # wait 82771 00:15:39.633 16:08:09 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:39.633 16:08:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:39.633 16:08:09 -- target/tls.sh@203 -- # echo '{ 00:15:39.633 "subsystems": [ 00:15:39.633 { 00:15:39.633 "subsystem": "keyring", 00:15:39.633 "config": [] 00:15:39.633 }, 00:15:39.633 { 00:15:39.633 "subsystem": "iobuf", 00:15:39.633 "config": [ 00:15:39.633 { 00:15:39.633 "method": "iobuf_set_options", 00:15:39.633 "params": { 00:15:39.633 "small_pool_count": 8192, 00:15:39.633 "large_pool_count": 1024, 00:15:39.633 "small_bufsize": 8192, 00:15:39.633 "large_bufsize": 135168 00:15:39.633 } 00:15:39.633 } 00:15:39.633 ] 00:15:39.633 }, 00:15:39.633 { 00:15:39.634 "subsystem": "sock", 00:15:39.634 "config": [ 00:15:39.634 { 00:15:39.634 "method": "sock_impl_set_options", 00:15:39.634 "params": { 00:15:39.634 "impl_name": "uring", 00:15:39.634 "recv_buf_size": 2097152, 
00:15:39.634 "send_buf_size": 2097152, 00:15:39.634 "enable_recv_pipe": true, 00:15:39.634 "enable_quickack": false, 00:15:39.634 "enable_placement_id": 0, 00:15:39.634 "enable_zerocopy_send_server": false, 00:15:39.634 "enable_zerocopy_send_client": false, 00:15:39.634 "zerocopy_threshold": 0, 00:15:39.634 "tls_version": 0, 00:15:39.634 "enable_ktls": false 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "method": "sock_impl_set_options", 00:15:39.634 "params": { 00:15:39.634 "impl_name": "posix", 00:15:39.634 "recv_buf_size": 2097152, 00:15:39.634 "send_buf_size": 2097152, 00:15:39.634 "enable_recv_pipe": true, 00:15:39.634 "enable_quickack": false, 00:15:39.634 "enable_placement_id": 0, 00:15:39.634 "enable_zerocopy_send_server": true, 00:15:39.634 "enable_zerocopy_send_client": false, 00:15:39.634 "zerocopy_threshold": 0, 00:15:39.634 "tls_version": 0, 00:15:39.634 "enable_ktls": false 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "method": "sock_impl_set_options", 00:15:39.634 "params": { 00:15:39.634 "impl_name": "ssl", 00:15:39.634 "recv_buf_size": 4096, 00:15:39.634 "send_buf_size": 4096, 00:15:39.634 "enable_recv_pipe": true, 00:15:39.634 "enable_quickack": false, 00:15:39.634 "enable_placement_id": 0, 00:15:39.634 "enable_zerocopy_send_server": true, 00:15:39.634 "enable_zerocopy_send_client": false, 00:15:39.634 "zerocopy_threshold": 0, 00:15:39.634 "tls_version": 0, 00:15:39.634 "enable_ktls": false 00:15:39.634 } 00:15:39.634 } 00:15:39.634 ] 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "subsystem": "vmd", 00:15:39.634 "config": [] 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "subsystem": "accel", 00:15:39.634 "config": [ 00:15:39.634 { 00:15:39.634 "method": "accel_set_options", 00:15:39.634 "params": { 00:15:39.634 "small_cache_size": 128, 00:15:39.634 "large_cache_size": 16, 00:15:39.634 "task_count": 2048, 00:15:39.634 "sequence_count": 2048, 00:15:39.634 "buf_count": 2048 00:15:39.634 } 00:15:39.634 } 00:15:39.634 ] 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "subsystem": "bdev", 00:15:39.634 "config": [ 00:15:39.634 { 00:15:39.634 "method": "bdev_set_options", 00:15:39.634 "params": { 00:15:39.634 "bdev_io_pool_size": 65535, 00:15:39.634 "bdev_io_cache_size": 256, 00:15:39.634 "bdev_auto_examine": true, 00:15:39.634 "iobuf_small_cache_size": 128, 00:15:39.634 "iobuf_large_cache_size": 16 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "method": "bdev_raid_set_options", 00:15:39.634 "params": { 00:15:39.634 "process_window_size_kb": 1024 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "method": "bdev_iscsi_set_options", 00:15:39.634 "params": { 00:15:39.634 "timeout_sec": 30 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "method": "bdev_nvme_set_options", 00:15:39.634 "params": { 00:15:39.634 "action_on_timeout": "none", 00:15:39.634 "timeout_us": 0, 00:15:39.634 "timeout_admin_us": 0, 00:15:39.634 "keep_alive_timeout_ms": 10000, 00:15:39.634 "arbitration_burst": 0, 00:15:39.634 "low_priority_weight": 0, 00:15:39.634 "medium_priority_weight": 0, 00:15:39.634 "high_priority_weight": 0, 00:15:39.634 "nvme_adminq_poll_period_us": 10000, 00:15:39.634 "nvme_ioq_poll_period_us": 0, 00:15:39.634 "io_queue_requests": 0, 00:15:39.634 "delay_cmd_submit": true, 00:15:39.634 "transport_retry_count": 4, 00:15:39.634 "bdev_retry_count": 3, 00:15:39.634 "transport_ack_timeout": 0, 00:15:39.634 "ctrlr_loss_timeout_sec": 0, 00:15:39.634 "reconnect_delay_sec": 0, 00:15:39.634 "fast_io_fail_timeout_sec": 0, 00:15:39.634 
"disable_auto_failback": false, 00:15:39.634 "generate_uuids": false, 00:15:39.634 "transport_tos": 0, 00:15:39.634 "nvme_error_stat": false, 00:15:39.634 "rdma_srq_size": 0, 00:15:39.634 "io_path_stat": false, 00:15:39.634 "allow_acc 16:08:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:39.634 el_sequence": false, 00:15:39.634 "rdma_max_cq_size": 0, 00:15:39.634 "rdma_cm_event_timeout_ms": 0, 00:15:39.634 "dhchap_digests": [ 00:15:39.634 "sha256", 00:15:39.634 "sha384", 00:15:39.634 "sha512" 00:15:39.634 ], 00:15:39.634 "dhchap_dhgroups": [ 00:15:39.634 "null", 00:15:39.634 "ffdhe2048", 00:15:39.634 "ffdhe3072", 00:15:39.634 "ffdhe4096", 00:15:39.634 "ffdhe6144", 00:15:39.634 "ffdhe8192" 00:15:39.634 ] 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "method": "bdev_nvme_set_hotplug", 00:15:39.634 "params": { 00:15:39.634 "period_us": 100000, 00:15:39.634 "enable": false 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "method": "bdev_malloc_create", 00:15:39.634 "params": { 00:15:39.634 "name": "malloc0", 00:15:39.634 "num_blocks": 8192, 00:15:39.634 "block_size": 4096, 00:15:39.634 "physical_block_size": 4096, 00:15:39.634 "uuid": "6f3bd47a-99d9-4261-8061-63d30d85732b", 00:15:39.634 "optimal_io_boundary": 0 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "method": "bdev_wait_for_examine" 00:15:39.634 } 00:15:39.634 ] 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "subsystem": "nbd", 00:15:39.634 "config": [] 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "subsystem": "scheduler", 00:15:39.634 "config": [ 00:15:39.634 { 00:15:39.634 "method": "framework_set_scheduler", 00:15:39.634 "params": { 00:15:39.634 "name": "static" 00:15:39.634 } 00:15:39.634 } 00:15:39.634 ] 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "subsystem": "nvmf", 00:15:39.634 "config": [ 00:15:39.634 { 00:15:39.634 "method": "nvmf_set_config", 00:15:39.634 "params": { 00:15:39.634 "discovery_filter": "match_any", 00:15:39.634 "admin_cmd_passthru": { 00:15:39.634 "identify_ctrlr": false 00:15:39.634 } 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "method": "nvmf_set_max_subsystems", 00:15:39.634 "params": { 00:15:39.634 "max_subsystems": 1024 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "method": "nvmf_set_crdt", 00:15:39.634 "params": { 00:15:39.634 "crdt1": 0, 00:15:39.634 "crdt2": 0, 00:15:39.634 "crdt3": 0 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "method": "nvmf_create_transport", 00:15:39.634 "params": { 00:15:39.634 "trtype": "TCP", 00:15:39.634 "max_queue_depth": 128, 00:15:39.634 "max_io_qpairs_per_ctrlr": 127, 00:15:39.634 "in_capsule_data_size": 4096, 00:15:39.634 "max_io_size": 131072, 00:15:39.634 "io_unit_size": 131072, 00:15:39.634 "max_aq_depth": 128, 00:15:39.634 "num_shared_buffers": 511, 00:15:39.634 "buf_cache_size": 4294967295, 00:15:39.634 "dif_insert_or_strip": false, 00:15:39.634 "zcopy": false, 00:15:39.634 "c2h_success": false, 00:15:39.634 "sock_priority": 0, 00:15:39.634 "abort_timeout_sec": 1, 00:15:39.634 "ack_timeout": 0 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "method": "nvmf_create_subsystem", 00:15:39.634 "params": { 00:15:39.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.634 "allow_any_host": false, 00:15:39.634 "serial_number": "SPDK00000000000001", 00:15:39.634 "model_number": "SPDK bdev Controller", 00:15:39.634 "max_namespaces": 10, 00:15:39.634 "min_cntlid": 1, 00:15:39.634 "max_cntlid": 65519, 00:15:39.634 "ana_reporting": false 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 
00:15:39.634 "method": "nvmf_subsystem_add_host", 00:15:39.634 "params": { 00:15:39.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.634 "host": "nqn.2016-06.io.spdk:host1", 00:15:39.634 "psk": "/tmp/tmp.Y8qtaTnaW9" 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "method": "nvmf_subsystem_add_ns", 00:15:39.634 "params": { 00:15:39.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.634 "namespace": { 00:15:39.634 "nsid": 1, 00:15:39.634 "bdev_name": "malloc0", 00:15:39.634 "nguid": "6F3BD47A99D94261806163D30D85732B", 00:15:39.634 "uuid": "6f3bd47a-99d9-4261-8061-63d30d85732b", 00:15:39.634 "no_auto_visible": false 00:15:39.634 } 00:15:39.634 } 00:15:39.634 }, 00:15:39.634 { 00:15:39.634 "method": "nvmf_subsystem_add_listener", 00:15:39.634 "params": { 00:15:39.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.634 "listen_address": { 00:15:39.634 "trtype": "TCP", 00:15:39.634 "adrfam": "IPv4", 00:15:39.634 "traddr": "10.0.0.2", 00:15:39.634 "trsvcid": "4420" 00:15:39.634 }, 00:15:39.634 "secure_channel": true 00:15:39.634 } 00:15:39.634 } 00:15:39.634 ] 00:15:39.634 } 00:15:39.634 ] 00:15:39.634 }' 00:15:39.634 16:08:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.634 16:08:09 -- nvmf/common.sh@470 -- # nvmfpid=82870 00:15:39.634 16:08:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:39.634 16:08:09 -- nvmf/common.sh@471 -- # waitforlisten 82870 00:15:39.634 16:08:09 -- common/autotest_common.sh@817 -- # '[' -z 82870 ']' 00:15:39.634 16:08:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.634 16:08:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:39.634 16:08:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.634 16:08:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:39.634 16:08:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.634 [2024-04-15 16:08:09.405140] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:39.634 [2024-04-15 16:08:09.405392] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.634 [2024-04-15 16:08:09.545860] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.634 [2024-04-15 16:08:09.592986] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.634 [2024-04-15 16:08:09.593243] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.634 [2024-04-15 16:08:09.593402] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.634 [2024-04-15 16:08:09.593456] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.634 [2024-04-15 16:08:09.593486] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:39.634 [2024-04-15 16:08:09.593610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.892 [2024-04-15 16:08:09.794605] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.892 [2024-04-15 16:08:09.810575] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:39.892 [2024-04-15 16:08:09.826553] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:39.892 [2024-04-15 16:08:09.826890] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.456 16:08:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:40.456 16:08:10 -- common/autotest_common.sh@850 -- # return 0 00:15:40.456 16:08:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:40.456 16:08:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:40.456 16:08:10 -- common/autotest_common.sh@10 -- # set +x 00:15:40.715 16:08:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.715 16:08:10 -- target/tls.sh@207 -- # bdevperf_pid=82902 00:15:40.715 16:08:10 -- target/tls.sh@208 -- # waitforlisten 82902 /var/tmp/bdevperf.sock 00:15:40.715 16:08:10 -- common/autotest_common.sh@817 -- # '[' -z 82902 ']' 00:15:40.715 16:08:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:40.715 16:08:10 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:40.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:40.715 16:08:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:40.715 16:08:10 -- target/tls.sh@204 -- # echo '{ 00:15:40.715 "subsystems": [ 00:15:40.715 { 00:15:40.715 "subsystem": "keyring", 00:15:40.715 "config": [] 00:15:40.715 }, 00:15:40.715 { 00:15:40.715 "subsystem": "iobuf", 00:15:40.715 "config": [ 00:15:40.715 { 00:15:40.715 "method": "iobuf_set_options", 00:15:40.715 "params": { 00:15:40.715 "small_pool_count": 8192, 00:15:40.715 "large_pool_count": 1024, 00:15:40.715 "small_bufsize": 8192, 00:15:40.715 "large_bufsize": 135168 00:15:40.715 } 00:15:40.715 } 00:15:40.715 ] 00:15:40.715 }, 00:15:40.715 { 00:15:40.715 "subsystem": "sock", 00:15:40.715 "config": [ 00:15:40.715 { 00:15:40.715 "method": "sock_impl_set_options", 00:15:40.715 "params": { 00:15:40.715 "impl_name": "uring", 00:15:40.715 "recv_buf_size": 2097152, 00:15:40.715 "send_buf_size": 2097152, 00:15:40.715 "enable_recv_pipe": true, 00:15:40.715 "enable_quickack": false, 00:15:40.715 "enable_placement_id": 0, 00:15:40.715 "enable_zerocopy_send_server": false, 00:15:40.715 "enable_zerocopy_send_client": false, 00:15:40.715 "zerocopy_threshold": 0, 00:15:40.715 "tls_version": 0, 00:15:40.715 "enable_ktls": false 00:15:40.715 } 00:15:40.715 }, 00:15:40.715 { 00:15:40.715 "method": "sock_impl_set_options", 00:15:40.715 "params": { 00:15:40.715 "impl_name": "posix", 00:15:40.715 "recv_buf_size": 2097152, 00:15:40.715 "send_buf_size": 2097152, 00:15:40.715 "enable_recv_pipe": true, 00:15:40.715 "enable_quickack": false, 00:15:40.715 "enable_placement_id": 0, 00:15:40.715 "enable_zerocopy_send_server": true, 00:15:40.715 "enable_zerocopy_send_client": false, 00:15:40.715 "zerocopy_threshold": 0, 00:15:40.715 "tls_version": 0, 00:15:40.715 "enable_ktls": false 00:15:40.715 } 00:15:40.715 }, 
00:15:40.715 { 00:15:40.715 "method": "sock_impl_set_options", 00:15:40.715 "params": { 00:15:40.715 "impl_name": "ssl", 00:15:40.715 "recv_buf_size": 4096, 00:15:40.715 "send_buf_size": 4096, 00:15:40.715 "enable_recv_pipe": true, 00:15:40.715 "enable_quickack": false, 00:15:40.715 "enable_placement_id": 0, 00:15:40.715 "enable_zerocopy_send_server": true, 00:15:40.715 "enable_zerocopy_send_client": false, 00:15:40.715 "zerocopy_threshold": 0, 00:15:40.715 "tls_version": 0, 00:15:40.715 "enable_ktls": false 00:15:40.715 } 00:15:40.715 } 00:15:40.715 ] 00:15:40.715 }, 00:15:40.715 { 00:15:40.715 "subsystem": "vmd", 00:15:40.715 "config": [] 00:15:40.715 }, 00:15:40.715 { 00:15:40.715 "subsystem": "accel", 00:15:40.715 "config": [ 00:15:40.715 { 00:15:40.715 "method": "accel_set_options", 00:15:40.715 "params": { 00:15:40.715 "small_cache_size": 128, 00:15:40.715 "large_cache_size": 16, 00:15:40.715 "task_count": 2048, 00:15:40.715 "sequence_count": 2048, 00:15:40.715 "buf_count": 2048 00:15:40.715 } 00:15:40.715 } 00:15:40.715 ] 00:15:40.715 }, 00:15:40.715 { 00:15:40.715 "subsystem": "bdev", 00:15:40.715 "config": [ 00:15:40.715 { 00:15:40.715 "method": "bdev_set_options", 00:15:40.715 "params": { 00:15:40.715 "bdev_io_pool_size": 65535, 00:15:40.715 "bdev_io_cache_size": 256, 00:15:40.715 "bdev_auto_examine": true, 00:15:40.715 "iobuf_small_cache_size": 128, 00:15:40.715 "iobuf_large_cache_size": 16 00:15:40.715 } 00:15:40.715 }, 00:15:40.715 { 00:15:40.715 "method": "bdev_raid_set_options", 00:15:40.715 "params": { 00:15:40.715 "process_window_size_kb": 1024 00:15:40.715 } 00:15:40.715 }, 00:15:40.715 { 00:15:40.715 "method": "bdev_iscsi_set_options", 00:15:40.715 "params": { 00:15:40.715 "timeout_sec": 30 00:15:40.715 } 00:15:40.715 }, 00:15:40.715 { 00:15:40.715 "method": "bdev_nvme_set_options", 00:15:40.715 "params": { 00:15:40.715 "action_on_timeout": "none", 00:15:40.715 "timeout_us": 0, 00:15:40.715 "timeout_admin_us": 0, 00:15:40.715 "keep_alive_timeout_ms": 10000, 00:15:40.715 "arbitration_burst": 0, 00:15:40.715 "low_priority_weight": 0, 00:15:40.715 "medium_priority_weight": 0, 00:15:40.715 "high_priority_weight": 0, 00:15:40.715 "nvme_adminq_poll_period_us": 10000, 00:15:40.715 "nvme_ioq_poll_period_us": 0, 00:15:40.715 "io_queue_requests": 512, 00:15:40.715 "delay_cmd_submit": true, 00:15:40.715 "transport_retry_count": 4, 00:15:40.715 "bdev_retry_count": 3, 00:15:40.715 "transport_ack_timeout": 0, 00:15:40.715 "ctrlr_loss_timeout_sec": 0, 00:15:40.715 "reconnect_delay_sec": 0, 00:15:40.715 "fast_io_fail_timeout_sec": 0, 00:15:40.715 "disable_auto_failback": false, 00:15:40.715 "generate_uuids": false, 00:15:40.715 "transport_tos": 0, 00:15:40.715 "nvme_error_stat": false, 00:15:40.715 "rdma_srq_size": 0, 00:15:40.715 "io_path_stat": false, 00:15:40.715 "allow_accel_sequence": false, 00:15:40.715 "rdma_max_cq_size": 0, 00:15:40.715 "rdma_cm_event_timeout_ms": 0, 00:15:40.715 "dhchap_digests": [ 00:15:40.715 "sha256", 00:15:40.715 "sha384", 00:15:40.715 "sha512" 00:15:40.715 ], 00:15:40.715 "dhchap_dhgroups": [ 00:15:40.715 "null", 00:15:40.715 "ffdhe2048", 00:15:40.715 "ffdhe3072", 00:15:40.715 "ffdhe4096", 00:15:40.715 "ffdhe6144", 00:15:40.715 "ffdhe8192" 00:15:40.715 ] 00:15:40.715 } 00:15:40.715 }, 00:15:40.715 { 00:15:40.715 "method": "bdev_nvme_attach_controller", 00:15:40.715 "params": { 00:15:40.715 "name": "TLSTEST", 00:15:40.715 "trtype": "TCP", 00:15:40.715 "adrfam": "IPv4", 00:15:40.715 "traddr": "10.0.0.2", 00:15:40.715 "trsvcid": "4420", 00:15:40.715 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:40.715 "prchk_reftag": false, 00:15:40.715 "prchk_guard": false, 00:15:40.715 "ctrlr_loss_timeout_sec": 0, 00:15:40.715 "reconnect_delay_sec": 0, 00:15:40.715 "fast_io_fail_timeout_sec": 0, 00:15:40.715 "psk": "/tmp/tmp.Y8qtaTnaW9", 00:15:40.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:40.715 "hdgst": false, 00:15:40.715 "ddgst": false 00:15:40.715 } 00:15:40.715 }, 00:15:40.715 { 00:15:40.715 "method": "bdev_nvme_set_hotplug", 00:15:40.715 "params": { 00:15:40.715 "period_us": 100000, 00:15:40.715 "enable": false 00:15:40.715 } 00:15:40.715 }, 00:15:40.715 { 00:15:40.715 "method": "bdev_wait_for_examine" 00:15:40.715 } 00:15:40.715 ] 00:15:40.715 }, 00:15:40.715 { 00:15:40.715 "subsystem": "nbd", 00:15:40.715 "config": [] 00:15:40.715 } 00:15:40.715 ] 00:15:40.715 }' 00:15:40.715 16:08:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:40.715 16:08:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:40.715 16:08:10 -- common/autotest_common.sh@10 -- # set +x 00:15:40.715 [2024-04-15 16:08:10.481038] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:40.715 [2024-04-15 16:08:10.481328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82902 ] 00:15:40.715 [2024-04-15 16:08:10.630934] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.974 [2024-04-15 16:08:10.684037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.974 [2024-04-15 16:08:10.834107] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:40.974 [2024-04-15 16:08:10.834469] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:41.539 16:08:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:41.539 16:08:11 -- common/autotest_common.sh@850 -- # return 0 00:15:41.539 16:08:11 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:41.798 Running I/O for 10 seconds... 
00:15:51.772 00:15:51.772 Latency(us) 00:15:51.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.772 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:51.772 Verification LBA range: start 0x0 length 0x2000 00:15:51.772 TLSTESTn1 : 10.02 5577.43 21.79 0.00 0.00 22908.82 6335.15 25340.59 00:15:51.772 =================================================================================================================== 00:15:51.772 Total : 5577.43 21.79 0.00 0.00 22908.82 6335.15 25340.59 00:15:51.772 0 00:15:51.772 16:08:21 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:51.772 16:08:21 -- target/tls.sh@214 -- # killprocess 82902 00:15:51.772 16:08:21 -- common/autotest_common.sh@936 -- # '[' -z 82902 ']' 00:15:51.772 16:08:21 -- common/autotest_common.sh@940 -- # kill -0 82902 00:15:51.772 16:08:21 -- common/autotest_common.sh@941 -- # uname 00:15:51.772 16:08:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:51.772 16:08:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82902 00:15:51.772 killing process with pid 82902 00:15:51.772 Received shutdown signal, test time was about 10.000000 seconds 00:15:51.772 00:15:51.772 Latency(us) 00:15:51.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.772 =================================================================================================================== 00:15:51.772 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:51.772 16:08:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:51.772 16:08:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:51.772 16:08:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82902' 00:15:51.772 16:08:21 -- common/autotest_common.sh@955 -- # kill 82902 00:15:51.772 [2024-04-15 16:08:21.638558] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:51.772 16:08:21 -- common/autotest_common.sh@960 -- # wait 82902 00:15:52.031 16:08:21 -- target/tls.sh@215 -- # killprocess 82870 00:15:52.032 16:08:21 -- common/autotest_common.sh@936 -- # '[' -z 82870 ']' 00:15:52.032 16:08:21 -- common/autotest_common.sh@940 -- # kill -0 82870 00:15:52.032 16:08:21 -- common/autotest_common.sh@941 -- # uname 00:15:52.032 16:08:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:52.032 16:08:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82870 00:15:52.032 16:08:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:52.032 16:08:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:52.032 16:08:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82870' 00:15:52.032 killing process with pid 82870 00:15:52.032 16:08:21 -- common/autotest_common.sh@955 -- # kill 82870 00:15:52.032 [2024-04-15 16:08:21.858153] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:52.032 16:08:21 -- common/autotest_common.sh@960 -- # wait 82870 00:15:52.291 16:08:22 -- target/tls.sh@218 -- # nvmfappstart 00:15:52.291 16:08:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:52.291 16:08:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:52.291 16:08:22 -- common/autotest_common.sh@10 -- # set +x 00:15:52.291 16:08:22 -- nvmf/common.sh@470 -- # 
nvmfpid=83035 00:15:52.291 16:08:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:52.291 16:08:22 -- nvmf/common.sh@471 -- # waitforlisten 83035 00:15:52.291 16:08:22 -- common/autotest_common.sh@817 -- # '[' -z 83035 ']' 00:15:52.291 16:08:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.291 16:08:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:52.291 16:08:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.291 16:08:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:52.291 16:08:22 -- common/autotest_common.sh@10 -- # set +x 00:15:52.291 [2024-04-15 16:08:22.099634] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:52.291 [2024-04-15 16:08:22.099866] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.291 [2024-04-15 16:08:22.240175] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.550 [2024-04-15 16:08:22.283605] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.550 [2024-04-15 16:08:22.283804] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.550 [2024-04-15 16:08:22.283967] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.550 [2024-04-15 16:08:22.284017] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.550 [2024-04-15 16:08:22.284045] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:52.550 [2024-04-15 16:08:22.284098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.116 16:08:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:53.117 16:08:23 -- common/autotest_common.sh@850 -- # return 0 00:15:53.117 16:08:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:53.117 16:08:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:53.117 16:08:23 -- common/autotest_common.sh@10 -- # set +x 00:15:53.375 16:08:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.375 16:08:23 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Y8qtaTnaW9 00:15:53.375 16:08:23 -- target/tls.sh@49 -- # local key=/tmp/tmp.Y8qtaTnaW9 00:15:53.375 16:08:23 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:53.375 [2024-04-15 16:08:23.320635] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.375 16:08:23 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:53.634 16:08:23 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:53.893 [2024-04-15 16:08:23.772746] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:53.893 [2024-04-15 16:08:23.773187] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.893 16:08:23 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:54.209 malloc0 00:15:54.209 16:08:24 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:54.469 16:08:24 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y8qtaTnaW9 00:15:54.469 [2024-04-15 16:08:24.382268] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:54.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:54.469 16:08:24 -- target/tls.sh@222 -- # bdevperf_pid=83094 00:15:54.469 16:08:24 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:54.469 16:08:24 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:54.469 16:08:24 -- target/tls.sh@225 -- # waitforlisten 83094 /var/tmp/bdevperf.sock 00:15:54.469 16:08:24 -- common/autotest_common.sh@817 -- # '[' -z 83094 ']' 00:15:54.469 16:08:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:54.469 16:08:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:54.469 16:08:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:54.469 16:08:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:54.469 16:08:24 -- common/autotest_common.sh@10 -- # set +x 00:15:54.728 [2024-04-15 16:08:24.442207] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:15:54.728 [2024-04-15 16:08:24.442481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83094 ] 00:15:54.728 [2024-04-15 16:08:24.582190] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.728 [2024-04-15 16:08:24.636823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.986 16:08:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:54.986 16:08:24 -- common/autotest_common.sh@850 -- # return 0 00:15:54.986 16:08:24 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Y8qtaTnaW9 00:15:54.986 16:08:24 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:55.244 [2024-04-15 16:08:25.202022] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:55.502 nvme0n1 00:15:55.502 16:08:25 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:55.502 Running I/O for 1 seconds... 00:15:56.878 00:15:56.878 Latency(us) 00:15:56.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.878 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:56.878 Verification LBA range: start 0x0 length 0x2000 00:15:56.878 nvme0n1 : 1.05 5167.84 20.19 0.00 0.00 24332.77 3151.97 52179.14 00:15:56.878 =================================================================================================================== 00:15:56.878 Total : 5167.84 20.19 0.00 0.00 24332.77 3151.97 52179.14 00:15:56.878 0 00:15:56.878 16:08:26 -- target/tls.sh@234 -- # killprocess 83094 00:15:56.878 16:08:26 -- common/autotest_common.sh@936 -- # '[' -z 83094 ']' 00:15:56.878 16:08:26 -- common/autotest_common.sh@940 -- # kill -0 83094 00:15:56.878 16:08:26 -- common/autotest_common.sh@941 -- # uname 00:15:56.878 16:08:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:56.878 16:08:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83094 00:15:56.878 16:08:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:56.878 16:08:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:56.878 16:08:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83094' 00:15:56.878 killing process with pid 83094 00:15:56.878 Received shutdown signal, test time was about 1.000000 seconds 00:15:56.878 00:15:56.878 Latency(us) 00:15:56.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.878 =================================================================================================================== 00:15:56.878 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:56.878 16:08:26 -- common/autotest_common.sh@955 -- # kill 83094 00:15:56.878 16:08:26 -- common/autotest_common.sh@960 -- # wait 83094 00:15:56.878 16:08:26 -- target/tls.sh@235 -- # killprocess 83035 00:15:56.878 16:08:26 -- common/autotest_common.sh@936 -- # '[' -z 83035 ']' 00:15:56.878 16:08:26 -- common/autotest_common.sh@940 -- # kill -0 83035 00:15:56.878 16:08:26 -- common/autotest_common.sh@941 -- # 
uname 00:15:56.878 16:08:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:56.878 16:08:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83035 00:15:56.878 16:08:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:56.878 16:08:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:56.878 16:08:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83035' 00:15:56.878 killing process with pid 83035 00:15:56.878 16:08:26 -- common/autotest_common.sh@955 -- # kill 83035 00:15:56.878 [2024-04-15 16:08:26.743946] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:56.878 16:08:26 -- common/autotest_common.sh@960 -- # wait 83035 00:15:57.138 16:08:26 -- target/tls.sh@238 -- # nvmfappstart 00:15:57.138 16:08:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:57.138 16:08:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:57.138 16:08:26 -- common/autotest_common.sh@10 -- # set +x 00:15:57.138 16:08:26 -- nvmf/common.sh@470 -- # nvmfpid=83133 00:15:57.138 16:08:26 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:57.138 16:08:26 -- nvmf/common.sh@471 -- # waitforlisten 83133 00:15:57.138 16:08:26 -- common/autotest_common.sh@817 -- # '[' -z 83133 ']' 00:15:57.138 16:08:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.138 16:08:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:57.138 16:08:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.138 16:08:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:57.138 16:08:26 -- common/autotest_common.sh@10 -- # set +x 00:15:57.138 [2024-04-15 16:08:27.007564] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:15:57.138 [2024-04-15 16:08:27.007807] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.397 [2024-04-15 16:08:27.148986] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.397 [2024-04-15 16:08:27.198182] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.397 [2024-04-15 16:08:27.198437] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.397 [2024-04-15 16:08:27.198566] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.397 [2024-04-15 16:08:27.198641] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.397 [2024-04-15 16:08:27.198670] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:57.397 [2024-04-15 16:08:27.198792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.397 16:08:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:57.397 16:08:27 -- common/autotest_common.sh@850 -- # return 0 00:15:57.397 16:08:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:57.397 16:08:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:57.397 16:08:27 -- common/autotest_common.sh@10 -- # set +x 00:15:57.397 16:08:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.397 16:08:27 -- target/tls.sh@239 -- # rpc_cmd 00:15:57.397 16:08:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.397 16:08:27 -- common/autotest_common.sh@10 -- # set +x 00:15:57.397 [2024-04-15 16:08:27.346205] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.397 malloc0 00:15:57.657 [2024-04-15 16:08:27.375721] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:57.657 [2024-04-15 16:08:27.375985] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.657 16:08:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.657 16:08:27 -- target/tls.sh@252 -- # bdevperf_pid=83158 00:15:57.657 16:08:27 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:57.657 16:08:27 -- target/tls.sh@254 -- # waitforlisten 83158 /var/tmp/bdevperf.sock 00:15:57.657 16:08:27 -- common/autotest_common.sh@817 -- # '[' -z 83158 ']' 00:15:57.657 16:08:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:57.657 16:08:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:57.657 16:08:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:57.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:57.657 16:08:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:57.657 16:08:27 -- common/autotest_common.sh@10 -- # set +x 00:15:57.657 [2024-04-15 16:08:27.447813] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:15:57.657 [2024-04-15 16:08:27.448070] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83158 ] 00:15:57.657 [2024-04-15 16:08:27.586866] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.915 [2024-04-15 16:08:27.640865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.915 16:08:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:57.915 16:08:27 -- common/autotest_common.sh@850 -- # return 0 00:15:57.915 16:08:27 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Y8qtaTnaW9 00:15:58.174 16:08:27 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:58.433 [2024-04-15 16:08:28.238011] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:58.433 nvme0n1 00:15:58.433 16:08:28 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:58.697 Running I/O for 1 seconds... 00:15:59.646 00:15:59.646 Latency(us) 00:15:59.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.646 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:59.646 Verification LBA range: start 0x0 length 0x2000 00:15:59.646 nvme0n1 : 1.01 5745.64 22.44 0.00 0.00 22111.37 4493.90 18724.57 00:15:59.646 =================================================================================================================== 00:15:59.646 Total : 5745.64 22.44 0.00 0.00 22111.37 4493.90 18724.57 00:15:59.646 0 00:15:59.646 16:08:29 -- target/tls.sh@263 -- # rpc_cmd save_config 00:15:59.647 16:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:59.647 16:08:29 -- common/autotest_common.sh@10 -- # set +x 00:15:59.906 16:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:59.906 16:08:29 -- target/tls.sh@263 -- # tgtcfg='{ 00:15:59.906 "subsystems": [ 00:15:59.906 { 00:15:59.906 "subsystem": "keyring", 00:15:59.906 "config": [ 00:15:59.906 { 00:15:59.906 "method": "keyring_file_add_key", 00:15:59.906 "params": { 00:15:59.906 "name": "key0", 00:15:59.906 "path": "/tmp/tmp.Y8qtaTnaW9" 00:15:59.906 } 00:15:59.906 } 00:15:59.906 ] 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "subsystem": "iobuf", 00:15:59.906 "config": [ 00:15:59.906 { 00:15:59.906 "method": "iobuf_set_options", 00:15:59.906 "params": { 00:15:59.906 "small_pool_count": 8192, 00:15:59.906 "large_pool_count": 1024, 00:15:59.906 "small_bufsize": 8192, 00:15:59.906 "large_bufsize": 135168 00:15:59.906 } 00:15:59.906 } 00:15:59.906 ] 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "subsystem": "sock", 00:15:59.906 "config": [ 00:15:59.906 { 00:15:59.906 "method": "sock_impl_set_options", 00:15:59.906 "params": { 00:15:59.906 "impl_name": "uring", 00:15:59.906 "recv_buf_size": 2097152, 00:15:59.906 "send_buf_size": 2097152, 00:15:59.906 "enable_recv_pipe": true, 00:15:59.906 "enable_quickack": false, 00:15:59.906 "enable_placement_id": 0, 00:15:59.906 "enable_zerocopy_send_server": false, 00:15:59.906 "enable_zerocopy_send_client": false, 00:15:59.906 "zerocopy_threshold": 0, 
00:15:59.906 "tls_version": 0, 00:15:59.906 "enable_ktls": false 00:15:59.906 } 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "method": "sock_impl_set_options", 00:15:59.906 "params": { 00:15:59.906 "impl_name": "posix", 00:15:59.906 "recv_buf_size": 2097152, 00:15:59.906 "send_buf_size": 2097152, 00:15:59.906 "enable_recv_pipe": true, 00:15:59.906 "enable_quickack": false, 00:15:59.906 "enable_placement_id": 0, 00:15:59.906 "enable_zerocopy_send_server": true, 00:15:59.906 "enable_zerocopy_send_client": false, 00:15:59.906 "zerocopy_threshold": 0, 00:15:59.906 "tls_version": 0, 00:15:59.906 "enable_ktls": false 00:15:59.906 } 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "method": "sock_impl_set_options", 00:15:59.906 "params": { 00:15:59.906 "impl_name": "ssl", 00:15:59.906 "recv_buf_size": 4096, 00:15:59.906 "send_buf_size": 4096, 00:15:59.906 "enable_recv_pipe": true, 00:15:59.906 "enable_quickack": false, 00:15:59.906 "enable_placement_id": 0, 00:15:59.906 "enable_zerocopy_send_server": true, 00:15:59.906 "enable_zerocopy_send_client": false, 00:15:59.906 "zerocopy_threshold": 0, 00:15:59.906 "tls_version": 0, 00:15:59.906 "enable_ktls": false 00:15:59.906 } 00:15:59.906 } 00:15:59.906 ] 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "subsystem": "vmd", 00:15:59.906 "config": [] 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "subsystem": "accel", 00:15:59.906 "config": [ 00:15:59.906 { 00:15:59.906 "method": "accel_set_options", 00:15:59.906 "params": { 00:15:59.906 "small_cache_size": 128, 00:15:59.906 "large_cache_size": 16, 00:15:59.906 "task_count": 2048, 00:15:59.906 "sequence_count": 2048, 00:15:59.906 "buf_count": 2048 00:15:59.906 } 00:15:59.906 } 00:15:59.906 ] 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "subsystem": "bdev", 00:15:59.906 "config": [ 00:15:59.906 { 00:15:59.906 "method": "bdev_set_options", 00:15:59.906 "params": { 00:15:59.906 "bdev_io_pool_size": 65535, 00:15:59.906 "bdev_io_cache_size": 256, 00:15:59.906 "bdev_auto_examine": true, 00:15:59.906 "iobuf_small_cache_size": 128, 00:15:59.906 "iobuf_large_cache_size": 16 00:15:59.906 } 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "method": "bdev_raid_set_options", 00:15:59.906 "params": { 00:15:59.906 "process_window_size_kb": 1024 00:15:59.906 } 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "method": "bdev_iscsi_set_options", 00:15:59.906 "params": { 00:15:59.906 "timeout_sec": 30 00:15:59.906 } 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "method": "bdev_nvme_set_options", 00:15:59.906 "params": { 00:15:59.906 "action_on_timeout": "none", 00:15:59.906 "timeout_us": 0, 00:15:59.906 "timeout_admin_us": 0, 00:15:59.906 "keep_alive_timeout_ms": 10000, 00:15:59.906 "arbitration_burst": 0, 00:15:59.906 "low_priority_weight": 0, 00:15:59.906 "medium_priority_weight": 0, 00:15:59.906 "high_priority_weight": 0, 00:15:59.906 "nvme_adminq_poll_period_us": 10000, 00:15:59.906 "nvme_ioq_poll_period_us": 0, 00:15:59.906 "io_queue_requests": 0, 00:15:59.906 "delay_cmd_submit": true, 00:15:59.906 "transport_retry_count": 4, 00:15:59.906 "bdev_retry_count": 3, 00:15:59.906 "transport_ack_timeout": 0, 00:15:59.906 "ctrlr_loss_timeout_sec": 0, 00:15:59.906 "reconnect_delay_sec": 0, 00:15:59.906 "fast_io_fail_timeout_sec": 0, 00:15:59.906 "disable_auto_failback": false, 00:15:59.906 "generate_uuids": false, 00:15:59.906 "transport_tos": 0, 00:15:59.906 "nvme_error_stat": false, 00:15:59.906 "rdma_srq_size": 0, 00:15:59.906 "io_path_stat": false, 00:15:59.906 "allow_accel_sequence": false, 00:15:59.906 "rdma_max_cq_size": 0, 00:15:59.906 
"rdma_cm_event_timeout_ms": 0, 00:15:59.906 "dhchap_digests": [ 00:15:59.906 "sha256", 00:15:59.906 "sha384", 00:15:59.906 "sha512" 00:15:59.906 ], 00:15:59.906 "dhchap_dhgroups": [ 00:15:59.906 "null", 00:15:59.906 "ffdhe2048", 00:15:59.906 "ffdhe3072", 00:15:59.906 "ffdhe4096", 00:15:59.906 "ffdhe6144", 00:15:59.906 "ffdhe8192" 00:15:59.906 ] 00:15:59.906 } 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "method": "bdev_nvme_set_hotplug", 00:15:59.906 "params": { 00:15:59.906 "period_us": 100000, 00:15:59.906 "enable": false 00:15:59.906 } 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "method": "bdev_malloc_create", 00:15:59.906 "params": { 00:15:59.906 "name": "malloc0", 00:15:59.906 "num_blocks": 8192, 00:15:59.906 "block_size": 4096, 00:15:59.906 "physical_block_size": 4096, 00:15:59.906 "uuid": "0b71ecf3-a2fc-4ecb-8246-c9b989e95720", 00:15:59.906 "optimal_io_boundary": 0 00:15:59.906 } 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "method": "bdev_wait_for_examine" 00:15:59.906 } 00:15:59.906 ] 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "subsystem": "nbd", 00:15:59.906 "config": [] 00:15:59.906 }, 00:15:59.906 { 00:15:59.906 "subsystem": "scheduler", 00:15:59.906 "config": [ 00:15:59.906 { 00:15:59.906 "method": "framework_set_scheduler", 00:15:59.906 "params": { 00:15:59.906 "name": "static" 00:15:59.906 } 00:15:59.906 } 00:15:59.907 ] 00:15:59.907 }, 00:15:59.907 { 00:15:59.907 "subsystem": "nvmf", 00:15:59.907 "config": [ 00:15:59.907 { 00:15:59.907 "method": "nvmf_set_config", 00:15:59.907 "params": { 00:15:59.907 "discovery_filter": "match_any", 00:15:59.907 "admin_cmd_passthru": { 00:15:59.907 "identify_ctrlr": false 00:15:59.907 } 00:15:59.907 } 00:15:59.907 }, 00:15:59.907 { 00:15:59.907 "method": "nvmf_set_max_subsystems", 00:15:59.907 "params": { 00:15:59.907 "max_subsystems": 1024 00:15:59.907 } 00:15:59.907 }, 00:15:59.907 { 00:15:59.907 "method": "nvmf_set_crdt", 00:15:59.907 "params": { 00:15:59.907 "crdt1": 0, 00:15:59.907 "crdt2": 0, 00:15:59.907 "crdt3": 0 00:15:59.907 } 00:15:59.907 }, 00:15:59.907 { 00:15:59.907 "method": "nvmf_create_transport", 00:15:59.907 "params": { 00:15:59.907 "trtype": "TCP", 00:15:59.907 "max_queue_depth": 128, 00:15:59.907 "max_io_qpairs_per_ctrlr": 127, 00:15:59.907 "in_capsule_data_size": 4096, 00:15:59.907 "max_io_size": 131072, 00:15:59.907 "io_unit_size": 131072, 00:15:59.907 "max_aq_depth": 128, 00:15:59.907 "num_shared_buffers": 511, 00:15:59.907 "buf_cache_size": 4294967295, 00:15:59.907 "dif_insert_or_strip": false, 00:15:59.907 "zcopy": false, 00:15:59.907 "c2h_success": false, 00:15:59.907 "sock_priority": 0, 00:15:59.907 "abort_timeout_sec": 1, 00:15:59.907 "ack_timeout": 0 00:15:59.907 } 00:15:59.907 }, 00:15:59.907 { 00:15:59.907 "method": "nvmf_create_subsystem", 00:15:59.907 "params": { 00:15:59.907 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.907 "allow_any_host": false, 00:15:59.907 "serial_number": "00000000000000000000", 00:15:59.907 "model_number": "SPDK bdev Controller", 00:15:59.907 "max_namespaces": 32, 00:15:59.907 "min_cntlid": 1, 00:15:59.907 "max_cntlid": 65519, 00:15:59.907 "ana_reporting": false 00:15:59.907 } 00:15:59.907 }, 00:15:59.907 { 00:15:59.907 "method": "nvmf_subsystem_add_host", 00:15:59.907 "params": { 00:15:59.907 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.907 "host": "nqn.2016-06.io.spdk:host1", 00:15:59.907 "psk": "key0" 00:15:59.907 } 00:15:59.907 }, 00:15:59.907 { 00:15:59.907 "method": "nvmf_subsystem_add_ns", 00:15:59.907 "params": { 00:15:59.907 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:15:59.907 "namespace": { 00:15:59.907 "nsid": 1, 00:15:59.907 "bdev_name": "malloc0", 00:15:59.907 "nguid": "0B71ECF3A2FC4ECB8246C9B989E95720", 00:15:59.907 "uuid": "0b71ecf3-a2fc-4ecb-8246-c9b989e95720", 00:15:59.907 "no_auto_visible": false 00:15:59.907 } 00:15:59.907 } 00:15:59.907 }, 00:15:59.907 { 00:15:59.907 "method": "nvmf_subsystem_add_listener", 00:15:59.907 "params": { 00:15:59.907 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.907 "listen_address": { 00:15:59.907 "trtype": "TCP", 00:15:59.907 "adrfam": "IPv4", 00:15:59.907 "traddr": "10.0.0.2", 00:15:59.907 "trsvcid": "4420" 00:15:59.907 }, 00:15:59.907 "secure_channel": true 00:15:59.907 } 00:15:59.907 } 00:15:59.907 ] 00:15:59.907 } 00:15:59.907 ] 00:15:59.907 }' 00:15:59.907 16:08:29 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:00.165 16:08:29 -- target/tls.sh@264 -- # bperfcfg='{ 00:16:00.165 "subsystems": [ 00:16:00.165 { 00:16:00.165 "subsystem": "keyring", 00:16:00.165 "config": [ 00:16:00.165 { 00:16:00.165 "method": "keyring_file_add_key", 00:16:00.165 "params": { 00:16:00.166 "name": "key0", 00:16:00.166 "path": "/tmp/tmp.Y8qtaTnaW9" 00:16:00.166 } 00:16:00.166 } 00:16:00.166 ] 00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "subsystem": "iobuf", 00:16:00.166 "config": [ 00:16:00.166 { 00:16:00.166 "method": "iobuf_set_options", 00:16:00.166 "params": { 00:16:00.166 "small_pool_count": 8192, 00:16:00.166 "large_pool_count": 1024, 00:16:00.166 "small_bufsize": 8192, 00:16:00.166 "large_bufsize": 135168 00:16:00.166 } 00:16:00.166 } 00:16:00.166 ] 00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "subsystem": "sock", 00:16:00.166 "config": [ 00:16:00.166 { 00:16:00.166 "method": "sock_impl_set_options", 00:16:00.166 "params": { 00:16:00.166 "impl_name": "uring", 00:16:00.166 "recv_buf_size": 2097152, 00:16:00.166 "send_buf_size": 2097152, 00:16:00.166 "enable_recv_pipe": true, 00:16:00.166 "enable_quickack": false, 00:16:00.166 "enable_placement_id": 0, 00:16:00.166 "enable_zerocopy_send_server": false, 00:16:00.166 "enable_zerocopy_send_client": false, 00:16:00.166 "zerocopy_threshold": 0, 00:16:00.166 "tls_version": 0, 00:16:00.166 "enable_ktls": false 00:16:00.166 } 00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "method": "sock_impl_set_options", 00:16:00.166 "params": { 00:16:00.166 "impl_name": "posix", 00:16:00.166 "recv_buf_size": 2097152, 00:16:00.166 "send_buf_size": 2097152, 00:16:00.166 "enable_recv_pipe": true, 00:16:00.166 "enable_quickack": false, 00:16:00.166 "enable_placement_id": 0, 00:16:00.166 "enable_zerocopy_send_server": true, 00:16:00.166 "enable_zerocopy_send_client": false, 00:16:00.166 "zerocopy_threshold": 0, 00:16:00.166 "tls_version": 0, 00:16:00.166 "enable_ktls": false 00:16:00.166 } 00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "method": "sock_impl_set_options", 00:16:00.166 "params": { 00:16:00.166 "impl_name": "ssl", 00:16:00.166 "recv_buf_size": 4096, 00:16:00.166 "send_buf_size": 4096, 00:16:00.166 "enable_recv_pipe": true, 00:16:00.166 "enable_quickack": false, 00:16:00.166 "enable_placement_id": 0, 00:16:00.166 "enable_zerocopy_send_server": true, 00:16:00.166 "enable_zerocopy_send_client": false, 00:16:00.166 "zerocopy_threshold": 0, 00:16:00.166 "tls_version": 0, 00:16:00.166 "enable_ktls": false 00:16:00.166 } 00:16:00.166 } 00:16:00.166 ] 00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "subsystem": "vmd", 00:16:00.166 "config": [] 00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "subsystem": "accel", 00:16:00.166 "config": [ 
00:16:00.166 { 00:16:00.166 "method": "accel_set_options", 00:16:00.166 "params": { 00:16:00.166 "small_cache_size": 128, 00:16:00.166 "large_cache_size": 16, 00:16:00.166 "task_count": 2048, 00:16:00.166 "sequence_count": 2048, 00:16:00.166 "buf_count": 2048 00:16:00.166 } 00:16:00.166 } 00:16:00.166 ] 00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "subsystem": "bdev", 00:16:00.166 "config": [ 00:16:00.166 { 00:16:00.166 "method": "bdev_set_options", 00:16:00.166 "params": { 00:16:00.166 "bdev_io_pool_size": 65535, 00:16:00.166 "bdev_io_cache_size": 256, 00:16:00.166 "bdev_auto_examine": true, 00:16:00.166 "iobuf_small_cache_size": 128, 00:16:00.166 "iobuf_large_cache_size": 16 00:16:00.166 } 00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "method": "bdev_raid_set_options", 00:16:00.166 "params": { 00:16:00.166 "process_window_size_kb": 1024 00:16:00.166 } 00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "method": "bdev_iscsi_set_options", 00:16:00.166 "params": { 00:16:00.166 "timeout_sec": 30 00:16:00.166 } 00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "method": "bdev_nvme_set_options", 00:16:00.166 "params": { 00:16:00.166 "action_on_timeout": "none", 00:16:00.166 "timeout_us": 0, 00:16:00.166 "timeout_admin_us": 0, 00:16:00.166 "keep_alive_timeout_ms": 10000, 00:16:00.166 "arbitration_burst": 0, 00:16:00.166 "low_priority_weight": 0, 00:16:00.166 "medium_priority_weight": 0, 00:16:00.166 "high_priority_weight": 0, 00:16:00.166 "nvme_adminq_poll_period_us": 10000, 00:16:00.166 "nvme_ioq_poll_period_us": 0, 00:16:00.166 "io_queue_requests": 512, 00:16:00.166 "delay_cmd_submit": true, 00:16:00.166 "transport_retry_count": 4, 00:16:00.166 "bdev_retry_count": 3, 00:16:00.166 "transport_ack_timeout": 0, 00:16:00.166 "ctrlr_loss_timeout_sec": 0, 00:16:00.166 "reconnect_delay_sec": 0, 00:16:00.166 "fast_io_fail_timeout_sec": 0, 00:16:00.166 "disable_auto_failback": false, 00:16:00.166 "generate_uuids": false, 00:16:00.166 "transport_tos": 0, 00:16:00.166 "nvme_error_stat": false, 00:16:00.166 "rdma_srq_size": 0, 00:16:00.166 "io_path_stat": false, 00:16:00.166 "allow_accel_sequence": false, 00:16:00.166 "rdma_max_cq_size": 0, 00:16:00.166 "rdma_cm_event_timeout_ms": 0, 00:16:00.166 "dhchap_digests": [ 00:16:00.166 "sha256", 00:16:00.166 "sha384", 00:16:00.166 "sha512" 00:16:00.166 ], 00:16:00.166 "dhchap_dhgroups": [ 00:16:00.166 "null", 00:16:00.166 "ffdhe2048", 00:16:00.166 "ffdhe3072", 00:16:00.166 "ffdhe4096", 00:16:00.166 "ffdhe6144", 00:16:00.166 "ffdhe8192" 00:16:00.166 ] 00:16:00.166 } 00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "method": "bdev_nvme_attach_controller", 00:16:00.166 "params": { 00:16:00.166 "name": "nvme0", 00:16:00.166 "trtype": "TCP", 00:16:00.166 "adrfam": "IPv4", 00:16:00.166 "traddr": "10.0.0.2", 00:16:00.166 "trsvcid": "4420", 00:16:00.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.166 "prchk_reftag": false, 00:16:00.166 "prchk_guard": false, 00:16:00.166 "ctrlr_loss_timeout_sec": 0, 00:16:00.166 "reconnect_delay_sec": 0, 00:16:00.166 "fast_io_fail_timeout_sec": 0, 00:16:00.166 "psk": "key0", 00:16:00.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:00.166 "hdgst": false, 00:16:00.166 "ddgst": false 00:16:00.166 } 00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "method": "bdev_nvme_set_hotplug", 00:16:00.166 "params": { 00:16:00.166 "period_us": 100000, 00:16:00.166 "enable": false 00:16:00.166 } 00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "method": "bdev_enable_histogram", 00:16:00.166 "params": { 00:16:00.166 "name": "nvme0n1", 00:16:00.166 "enable": true 00:16:00.166 } 
00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "method": "bdev_wait_for_examine" 00:16:00.166 } 00:16:00.166 ] 00:16:00.166 }, 00:16:00.166 { 00:16:00.166 "subsystem": "nbd", 00:16:00.166 "config": [] 00:16:00.166 } 00:16:00.166 ] 00:16:00.166 }' 00:16:00.166 16:08:29 -- target/tls.sh@266 -- # killprocess 83158 00:16:00.166 16:08:29 -- common/autotest_common.sh@936 -- # '[' -z 83158 ']' 00:16:00.166 16:08:29 -- common/autotest_common.sh@940 -- # kill -0 83158 00:16:00.166 16:08:29 -- common/autotest_common.sh@941 -- # uname 00:16:00.166 16:08:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:00.166 16:08:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83158 00:16:00.166 killing process with pid 83158 00:16:00.166 Received shutdown signal, test time was about 1.000000 seconds 00:16:00.166 00:16:00.166 Latency(us) 00:16:00.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.166 =================================================================================================================== 00:16:00.166 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:00.166 16:08:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:00.166 16:08:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:00.166 16:08:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83158' 00:16:00.166 16:08:29 -- common/autotest_common.sh@955 -- # kill 83158 00:16:00.166 16:08:29 -- common/autotest_common.sh@960 -- # wait 83158 00:16:00.425 16:08:30 -- target/tls.sh@267 -- # killprocess 83133 00:16:00.425 16:08:30 -- common/autotest_common.sh@936 -- # '[' -z 83133 ']' 00:16:00.425 16:08:30 -- common/autotest_common.sh@940 -- # kill -0 83133 00:16:00.425 16:08:30 -- common/autotest_common.sh@941 -- # uname 00:16:00.425 16:08:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:00.426 16:08:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83133 00:16:00.426 16:08:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:00.426 16:08:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:00.426 16:08:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83133' 00:16:00.426 killing process with pid 83133 00:16:00.426 16:08:30 -- common/autotest_common.sh@955 -- # kill 83133 00:16:00.426 16:08:30 -- common/autotest_common.sh@960 -- # wait 83133 00:16:00.685 16:08:30 -- target/tls.sh@269 -- # echo '{ 00:16:00.685 "subsystems": [ 00:16:00.685 { 00:16:00.685 "subsystem": "keyring", 00:16:00.685 "config": [ 00:16:00.685 { 00:16:00.685 "method": "keyring_file_add_key", 00:16:00.685 "params": { 00:16:00.685 "name": "key0", 00:16:00.685 "path": "/tmp/tmp.Y8qtaTnaW9" 00:16:00.685 } 00:16:00.685 } 00:16:00.685 ] 00:16:00.685 }, 00:16:00.685 { 00:16:00.685 "subsystem": "iobuf", 00:16:00.685 "config": [ 00:16:00.685 { 00:16:00.685 "method": "iobuf_set_options", 00:16:00.685 "params": { 00:16:00.685 "small_pool_count": 8192, 00:16:00.685 "large_pool_count": 1024, 00:16:00.685 "small_bufsize": 8192, 00:16:00.685 "large_bufsize": 135168 00:16:00.685 } 00:16:00.685 } 00:16:00.685 ] 00:16:00.685 }, 00:16:00.685 { 00:16:00.685 "subsystem": "sock", 00:16:00.685 "config": [ 00:16:00.685 { 00:16:00.685 "method": "sock_impl_set_options", 00:16:00.685 "params": { 00:16:00.685 "impl_name": "uring", 00:16:00.685 "recv_buf_size": 2097152, 00:16:00.685 "send_buf_size": 2097152, 00:16:00.685 "enable_recv_pipe": true, 00:16:00.685 "enable_quickack": false, 
00:16:00.685 "enable_placement_id": 0, 00:16:00.685 "enable_zerocopy_send_server": false, 00:16:00.685 "enable_zerocopy_send_client": false, 00:16:00.685 "zerocopy_threshold": 0, 00:16:00.685 "tls_version": 0, 00:16:00.685 "enable_ktls": false 00:16:00.685 } 00:16:00.685 }, 00:16:00.685 { 00:16:00.685 "method": "sock_impl_set_options", 00:16:00.685 "params": { 00:16:00.685 "impl_name": "posix", 00:16:00.685 "recv_buf_size": 2097152, 00:16:00.685 "send_buf_size": 2097152, 00:16:00.685 "enable_recv_pipe": true, 00:16:00.685 "enable_quickack": false, 00:16:00.685 "enable_placement_id": 0, 00:16:00.685 "enable_zerocopy_send_server": true, 00:16:00.685 "enable_zerocopy_send_client": false, 00:16:00.685 "zerocopy_threshold": 0, 00:16:00.685 "tls_version": 0, 00:16:00.685 "enable_ktls": false 00:16:00.685 } 00:16:00.685 }, 00:16:00.685 { 00:16:00.685 "method": "sock_impl_set_options", 00:16:00.685 "params": { 00:16:00.685 "impl_name": "ssl", 00:16:00.685 "recv_buf_size": 4096, 00:16:00.685 "send_buf_size": 4096, 00:16:00.685 "enable_recv_pipe": true, 00:16:00.685 "enable_quickack": false, 00:16:00.685 "enable_placement_id": 0, 00:16:00.686 "enable_zerocopy_send_server": true, 00:16:00.686 "enable_zerocopy_send_client": false, 00:16:00.686 "zerocopy_threshold": 0, 00:16:00.686 "tls_version": 0, 00:16:00.686 "enable_ktls": false 00:16:00.686 } 00:16:00.686 } 00:16:00.686 ] 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "subsystem": "vmd", 00:16:00.686 "config": [] 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "subsystem": "accel", 00:16:00.686 "config": [ 00:16:00.686 { 00:16:00.686 "method": "accel_set_options", 00:16:00.686 "params": { 00:16:00.686 "small_cache_size": 128, 00:16:00.686 "large_cache_size": 16, 00:16:00.686 "task_count": 2048, 00:16:00.686 "sequence_count": 2048, 00:16:00.686 "buf_count": 2048 00:16:00.686 } 00:16:00.686 } 00:16:00.686 ] 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "subsystem": "bdev", 00:16:00.686 "config": [ 00:16:00.686 { 00:16:00.686 "method": "bdev_set_options", 00:16:00.686 "params": { 00:16:00.686 "bdev_io_pool_size": 65535, 00:16:00.686 "bdev_io_cache_size": 256, 00:16:00.686 "bdev_auto_examine": true, 00:16:00.686 "iobuf_small_cache_size": 128, 00:16:00.686 "iobuf_large_cache_size": 16 00:16:00.686 } 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "method": "bdev_raid_set_options", 00:16:00.686 "params": { 00:16:00.686 "process_window_size_kb": 1024 00:16:00.686 } 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "method": "bdev_iscsi_set_options", 00:16:00.686 "params": { 00:16:00.686 "timeout_sec": 30 00:16:00.686 } 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "method": "bdev_nvme_set_options", 00:16:00.686 "params": { 00:16:00.686 "action_on_timeout": "none", 00:16:00.686 "timeout_us": 0, 00:16:00.686 "timeout_admin_us": 0, 00:16:00.686 "keep_alive_timeout_ms": 10000, 00:16:00.686 "arbitration_burst": 0, 00:16:00.686 "low_priority_weight": 0, 00:16:00.686 "medium_priority_weight": 0, 00:16:00.686 "high_priority_weight": 0, 00:16:00.686 "nvme_adminq_poll_period_us": 10000, 00:16:00.686 "nvme_ioq_poll_period_us": 0, 00:16:00.686 "io_queue_requests": 0, 00:16:00.686 "delay_cmd_submit": true, 00:16:00.686 "transport_retry_count": 4, 00:16:00.686 "bdev_retry_count": 3, 00:16:00.686 "transport_ack_timeout": 0, 00:16:00.686 "ctrlr_loss_timeout_sec": 0, 00:16:00.686 "reconnect_delay_sec": 0, 00:16:00.686 "fast_io_fail_timeout_sec": 0, 00:16:00.686 "disable_auto_failback": false, 00:16:00.686 "generate_uuids": false, 00:16:00.686 "transport_tos": 0, 00:16:00.686 
"nvme_error_stat": false, 00:16:00.686 "rdma_srq_size": 0, 00:16:00.686 "io_path_stat": false, 00:16:00.686 "allow_accel_sequence": false, 00:16:00.686 "rdma_max_cq_size": 0, 00:16:00.686 "rdma_cm_event_timeout_ms": 0, 00:16:00.686 "dhchap_digests": [ 00:16:00.686 "sha256", 00:16:00.686 "sha384", 00:16:00.686 "sha512" 00:16:00.686 ], 00:16:00.686 "dhchap_dhgroups": [ 00:16:00.686 "null", 00:16:00.686 "ffdhe2048", 00:16:00.686 "ffdhe3072", 00:16:00.686 "ffdhe4096", 00:16:00.686 "ffdhe6144", 00:16:00.686 "ffdhe8192" 00:16:00.686 ] 00:16:00.686 } 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "method": "bdev_nvme_set_hotplug", 00:16:00.686 "params": { 00:16:00.686 "period_us": 100000, 00:16:00.686 "enable": false 00:16:00.686 } 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "method": "bdev_malloc_create", 00:16:00.686 "params": { 00:16:00.686 "name": "malloc0", 00:16:00.686 "num_blocks": 8192, 00:16:00.686 "block_size": 4096, 00:16:00.686 "physical_block_size": 4096, 00:16:00.686 "uuid": "0b71ecf3-a2fc-4ecb-8246-c9b989e95720", 00:16:00.686 "optimal_io_boundary": 0 00:16:00.686 } 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "method": "bdev_wait_for_examine" 00:16:00.686 } 00:16:00.686 ] 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "subsystem": "nbd", 00:16:00.686 "config": [] 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "subsystem": "scheduler", 00:16:00.686 "config": [ 00:16:00.686 { 00:16:00.686 "method": "framework_set_scheduler", 00:16:00.686 "params": { 00:16:00.686 "name": "static" 00:16:00.686 } 00:16:00.686 } 00:16:00.686 ] 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "subsystem": "nvmf", 00:16:00.686 "config": [ 00:16:00.686 { 00:16:00.686 "method": "nvmf_set_config", 00:16:00.686 "params": { 00:16:00.686 "discovery_filter": "match_any", 00:16:00.686 "admin_cmd_passthru": { 00:16:00.686 "identify_ctrlr": false 00:16:00.686 } 00:16:00.686 } 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "method": "nvmf_set_max_subsystems", 00:16:00.686 "params": { 00:16:00.686 "max_subsystems": 1024 00:16:00.686 } 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "method": "nvmf_set_crdt", 00:16:00.686 "params": { 00:16:00.686 "crdt1": 0, 00:16:00.686 "crdt2": 0, 00:16:00.686 "crdt3": 0 00:16:00.686 } 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "method": "nvmf_create_transport", 00:16:00.686 "params": { 00:16:00.686 "trtype": "TCP", 00:16:00.686 "max_queue_depth": 128, 00:16:00.686 "max_io_qpairs_per_ctrlr": 127, 00:16:00.686 "in_capsule_data_size": 4096, 00:16:00.686 "max_io_size": 131072, 00:16:00.686 "io_unit_size": 131072, 00:16:00.686 "max_aq_depth": 128, 00:16:00.686 "num_shared_buffers": 511, 00:16:00.686 "buf_cache_size": 42 16:08:30 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:16:00.686 94967295, 00:16:00.686 "dif_insert_or_strip": false, 00:16:00.686 "zcopy": false, 00:16:00.686 "c2h_success": false, 00:16:00.686 "sock_priority": 0, 00:16:00.686 "abort_timeout_sec": 1, 00:16:00.686 "ack_timeout": 0 00:16:00.686 } 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "method": "nvmf_create_subsystem", 00:16:00.686 "params": { 00:16:00.686 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.686 "allow_any_host": false, 00:16:00.686 "serial_number": "00000000000000000000", 00:16:00.686 "model_number": "SPDK bdev Controller", 00:16:00.686 "max_namespaces": 32, 00:16:00.686 "min_cntlid": 1, 00:16:00.686 "max_cntlid": 65519, 00:16:00.686 "ana_reporting": false 00:16:00.686 } 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "method": "nvmf_subsystem_add_host", 00:16:00.686 "params": { 00:16:00.686 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:16:00.686 "host": "nqn.2016-06.io.spdk:host1", 00:16:00.686 "psk": "key0" 00:16:00.686 } 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "method": "nvmf_subsystem_add_ns", 00:16:00.686 "params": { 00:16:00.686 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.686 "namespace": { 00:16:00.686 "nsid": 1, 00:16:00.686 "bdev_name": "malloc0", 00:16:00.686 "nguid": "0B71ECF3A2FC4ECB8246C9B989E95720", 00:16:00.686 "uuid": "0b71ecf3-a2fc-4ecb-8246-c9b989e95720", 00:16:00.686 "no_auto_visible": false 00:16:00.686 } 00:16:00.686 } 00:16:00.686 }, 00:16:00.686 { 00:16:00.686 "method": "nvmf_subsystem_add_listener", 00:16:00.686 "params": { 00:16:00.686 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.686 "listen_address": { 00:16:00.686 "trtype": "TCP", 00:16:00.686 "adrfam": "IPv4", 00:16:00.686 "traddr": "10.0.0.2", 00:16:00.686 "trsvcid": "4420" 00:16:00.686 }, 00:16:00.686 "secure_channel": true 00:16:00.686 } 00:16:00.686 } 00:16:00.686 ] 00:16:00.686 } 00:16:00.686 ] 00:16:00.686 }' 00:16:00.686 16:08:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:00.686 16:08:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:00.686 16:08:30 -- common/autotest_common.sh@10 -- # set +x 00:16:00.686 16:08:30 -- nvmf/common.sh@470 -- # nvmfpid=83211 00:16:00.686 16:08:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:00.686 16:08:30 -- nvmf/common.sh@471 -- # waitforlisten 83211 00:16:00.686 16:08:30 -- common/autotest_common.sh@817 -- # '[' -z 83211 ']' 00:16:00.686 16:08:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.686 16:08:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:00.686 16:08:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.686 16:08:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:00.686 16:08:30 -- common/autotest_common.sh@10 -- # set +x 00:16:00.686 [2024-04-15 16:08:30.479656] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:16:00.686 [2024-04-15 16:08:30.479992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.686 [2024-04-15 16:08:30.628147] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.945 [2024-04-15 16:08:30.676186] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.945 [2024-04-15 16:08:30.676439] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.945 [2024-04-15 16:08:30.676548] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.945 [2024-04-15 16:08:30.676633] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.945 [2024-04-15 16:08:30.676663] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:00.945 [2024-04-15 16:08:30.676774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.945 [2024-04-15 16:08:30.888435] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.204 [2024-04-15 16:08:30.920396] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:01.204 [2024-04-15 16:08:30.920770] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.773 16:08:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:01.773 16:08:31 -- common/autotest_common.sh@850 -- # return 0 00:16:01.773 16:08:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:01.773 16:08:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:01.773 16:08:31 -- common/autotest_common.sh@10 -- # set +x 00:16:01.773 16:08:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.773 16:08:31 -- target/tls.sh@272 -- # bdevperf_pid=83243 00:16:01.773 16:08:31 -- target/tls.sh@273 -- # waitforlisten 83243 /var/tmp/bdevperf.sock 00:16:01.773 16:08:31 -- common/autotest_common.sh@817 -- # '[' -z 83243 ']' 00:16:01.773 16:08:31 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:01.773 16:08:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:01.773 16:08:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:01.773 16:08:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:01.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
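waitforlisten blocks until the freshly launched bdevperf answers on /var/tmp/bdevperf.sock. A simplified stand-in for that helper (the real one lives in autotest_common.sh and does more bookkeeping), assuming rpc.py is on hand; it just polls rpc_get_methods until the socket responds or the retry budget runs out:

    wait_for_rpc_sock() {
        local pid=$1 sock=$2 retries=100
        while (( retries-- > 0 )); do
            # Give up early if the application died during startup.
            kill -0 "$pid" 2>/dev/null || return 1
            # rpc_get_methods succeeds once the app is listening on $sock.
            scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }
    # e.g.: wait_for_rpc_sock "$bdevperf_pid" /var/tmp/bdevperf.sock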
00:16:01.773 16:08:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:01.773 16:08:31 -- common/autotest_common.sh@10 -- # set +x 00:16:01.773 16:08:31 -- target/tls.sh@270 -- # echo '{ 00:16:01.773 "subsystems": [ 00:16:01.773 { 00:16:01.773 "subsystem": "keyring", 00:16:01.773 "config": [ 00:16:01.773 { 00:16:01.773 "method": "keyring_file_add_key", 00:16:01.773 "params": { 00:16:01.773 "name": "key0", 00:16:01.773 "path": "/tmp/tmp.Y8qtaTnaW9" 00:16:01.773 } 00:16:01.773 } 00:16:01.774 ] 00:16:01.774 }, 00:16:01.774 { 00:16:01.774 "subsystem": "iobuf", 00:16:01.774 "config": [ 00:16:01.774 { 00:16:01.774 "method": "iobuf_set_options", 00:16:01.774 "params": { 00:16:01.774 "small_pool_count": 8192, 00:16:01.774 "large_pool_count": 1024, 00:16:01.774 "small_bufsize": 8192, 00:16:01.774 "large_bufsize": 135168 00:16:01.774 } 00:16:01.774 } 00:16:01.774 ] 00:16:01.774 }, 00:16:01.774 { 00:16:01.774 "subsystem": "sock", 00:16:01.774 "config": [ 00:16:01.774 { 00:16:01.774 "method": "sock_impl_set_options", 00:16:01.774 "params": { 00:16:01.774 "impl_name": "uring", 00:16:01.774 "recv_buf_size": 2097152, 00:16:01.774 "send_buf_size": 2097152, 00:16:01.774 "enable_recv_pipe": true, 00:16:01.774 "enable_quickack": false, 00:16:01.774 "enable_placement_id": 0, 00:16:01.774 "enable_zerocopy_send_server": false, 00:16:01.774 "enable_zerocopy_send_client": false, 00:16:01.774 "zerocopy_threshold": 0, 00:16:01.774 "tls_version": 0, 00:16:01.774 "enable_ktls": false 00:16:01.774 } 00:16:01.774 }, 00:16:01.774 { 00:16:01.774 "method": "sock_impl_set_options", 00:16:01.774 "params": { 00:16:01.774 "impl_name": "posix", 00:16:01.774 "recv_buf_size": 2097152, 00:16:01.774 "send_buf_size": 2097152, 00:16:01.774 "enable_recv_pipe": true, 00:16:01.774 "enable_quickack": false, 00:16:01.774 "enable_placement_id": 0, 00:16:01.774 "enable_zerocopy_send_server": true, 00:16:01.774 "enable_zerocopy_send_client": false, 00:16:01.774 "zerocopy_threshold": 0, 00:16:01.774 "tls_version": 0, 00:16:01.774 "enable_ktls": false 00:16:01.774 } 00:16:01.774 }, 00:16:01.774 { 00:16:01.774 "method": "sock_impl_set_options", 00:16:01.774 "params": { 00:16:01.774 "impl_name": "ssl", 00:16:01.774 "recv_buf_size": 4096, 00:16:01.774 "send_buf_size": 4096, 00:16:01.774 "enable_recv_pipe": true, 00:16:01.774 "enable_quickack": false, 00:16:01.774 "enable_placement_id": 0, 00:16:01.774 "enable_zerocopy_send_server": true, 00:16:01.774 "enable_zerocopy_send_client": false, 00:16:01.774 "zerocopy_threshold": 0, 00:16:01.774 "tls_version": 0, 00:16:01.774 "enable_ktls": false 00:16:01.774 } 00:16:01.774 } 00:16:01.774 ] 00:16:01.774 }, 00:16:01.774 { 00:16:01.774 "subsystem": "vmd", 00:16:01.774 "config": [] 00:16:01.774 }, 00:16:01.774 { 00:16:01.774 "subsystem": "accel", 00:16:01.774 "config": [ 00:16:01.774 { 00:16:01.774 "method": "accel_set_options", 00:16:01.774 "params": { 00:16:01.774 "small_cache_size": 128, 00:16:01.774 "large_cache_size": 16, 00:16:01.774 "task_count": 2048, 00:16:01.774 "sequence_count": 2048, 00:16:01.774 "buf_count": 2048 00:16:01.774 } 00:16:01.774 } 00:16:01.774 ] 00:16:01.774 }, 00:16:01.774 { 00:16:01.774 "subsystem": "bdev", 00:16:01.774 "config": [ 00:16:01.774 { 00:16:01.774 "method": "bdev_set_options", 00:16:01.774 "params": { 00:16:01.774 "bdev_io_pool_size": 65535, 00:16:01.774 "bdev_io_cache_size": 256, 00:16:01.774 "bdev_auto_examine": true, 00:16:01.774 "iobuf_small_cache_size": 128, 00:16:01.774 "iobuf_large_cache_size": 16 00:16:01.774 } 00:16:01.774 }, 00:16:01.774 { 
00:16:01.774 "method": "bdev_raid_set_options", 00:16:01.774 "params": { 00:16:01.774 "process_window_size_kb": 1024 00:16:01.774 } 00:16:01.774 }, 00:16:01.774 { 00:16:01.774 "method": "bdev_iscsi_set_options", 00:16:01.774 "params": { 00:16:01.774 "timeout_sec": 30 00:16:01.774 } 00:16:01.774 }, 00:16:01.774 { 00:16:01.774 "method": "bdev_nvme_set_options", 00:16:01.774 "params": { 00:16:01.774 "action_on_timeout": "none", 00:16:01.774 "timeout_us": 0, 00:16:01.774 "timeout_admin_us": 0, 00:16:01.774 "keep_alive_timeout_ms": 10000, 00:16:01.774 "arbitration_burst": 0, 00:16:01.774 "low_priority_weight": 0, 00:16:01.774 "medium_priority_weight": 0, 00:16:01.774 "high_priority_weight": 0, 00:16:01.774 "nvme_adminq_poll_period_us": 10000, 00:16:01.774 "nvme_ioq_poll_period_us": 0, 00:16:01.774 "io_queue_requests": 512, 00:16:01.774 "delay_cmd_submit": true, 00:16:01.774 "transport_retry_count": 4, 00:16:01.774 "bdev_retry_count": 3, 00:16:01.774 "transport_ack_timeout": 0, 00:16:01.774 "ctrlr_loss_timeout_sec": 0, 00:16:01.774 "reconnect_delay_sec": 0, 00:16:01.774 "fast_io_fail_timeout_sec": 0, 00:16:01.774 "disable_auto_failback": false, 00:16:01.774 "generate_uuids": false, 00:16:01.774 "transport_tos": 0, 00:16:01.774 "nvme_error_stat": false, 00:16:01.774 "rdma_srq_size": 0, 00:16:01.774 "io_path_stat": false, 00:16:01.774 "allow_accel_sequence": false, 00:16:01.774 "rdma_max_cq_size": 0, 00:16:01.774 "rdma_cm_event_timeout_ms": 0, 00:16:01.774 "dhchap_digests": [ 00:16:01.774 "sha256", 00:16:01.774 "sha384", 00:16:01.774 "sha512" 00:16:01.774 ], 00:16:01.774 "dhchap_dhgroups": [ 00:16:01.774 "null", 00:16:01.774 "ffdhe2048", 00:16:01.774 "ffdhe3072", 00:16:01.774 "ffdhe4096", 00:16:01.774 "ffdhe6144", 00:16:01.774 "ffdhe8192" 00:16:01.774 ] 00:16:01.774 } 00:16:01.774 }, 00:16:01.774 { 00:16:01.774 "method": "bdev_nvme_attach_controller", 00:16:01.774 "params": { 00:16:01.774 "name": "nvme0", 00:16:01.774 "trtype": "TCP", 00:16:01.774 "adrfam": "IPv4", 00:16:01.774 "traddr": "10.0.0.2", 00:16:01.774 "trsvcid": "4420", 00:16:01.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.774 "prchk_reftag": false, 00:16:01.774 "prchk_guard": false, 00:16:01.774 "ctrlr_loss_timeout_sec": 0, 00:16:01.774 "reconnect_delay_sec": 0, 00:16:01.774 "fast_io_fail_timeout_sec": 0, 00:16:01.774 "psk": "key0", 00:16:01.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:01.774 "hdgst": false, 00:16:01.774 "ddgst": false 00:16:01.774 } 00:16:01.774 }, 00:16:01.774 { 00:16:01.774 "method": "bdev_nvme_set_hotplug", 00:16:01.774 "params": { 00:16:01.774 "period_us": 100000, 00:16:01.774 "enable": false 00:16:01.774 } 00:16:01.774 }, 00:16:01.774 { 00:16:01.774 "method": "bdev_enable_histogram", 00:16:01.774 "params": { 00:16:01.774 "name": "nvme0n1", 00:16:01.774 "enable": true 00:16:01.774 } 00:16:01.774 }, 00:16:01.774 { 00:16:01.774 "method": "bdev_wait_for_examine" 00:16:01.774 } 00:16:01.774 ] 00:16:01.774 }, 00:16:01.774 { 00:16:01.774 "subsystem": "nbd", 00:16:01.774 "config": [] 00:16:01.774 } 00:16:01.774 ] 00:16:01.774 }' 00:16:01.774 [2024-04-15 16:08:31.616168] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:16:01.774 [2024-04-15 16:08:31.616475] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83243 ] 00:16:02.050 [2024-04-15 16:08:31.766881] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.050 [2024-04-15 16:08:31.819356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.050 [2024-04-15 16:08:31.979645] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:02.618 16:08:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:02.618 16:08:32 -- common/autotest_common.sh@850 -- # return 0 00:16:02.618 16:08:32 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:02.618 16:08:32 -- target/tls.sh@275 -- # jq -r '.[].name' 00:16:03.184 16:08:32 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.184 16:08:32 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:03.184 Running I/O for 1 seconds... 00:16:04.560 00:16:04.560 Latency(us) 00:16:04.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.560 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:04.560 Verification LBA range: start 0x0 length 0x2000 00:16:04.560 nvme0n1 : 1.01 5607.59 21.90 0.00 0.00 22662.38 3963.37 16976.94 00:16:04.560 =================================================================================================================== 00:16:04.560 Total : 5607.59 21.90 0.00 0.00 22662.38 3963.37 16976.94 00:16:04.560 0 00:16:04.560 16:08:34 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:16:04.560 16:08:34 -- target/tls.sh@279 -- # cleanup 00:16:04.560 16:08:34 -- target/tls.sh@15 -- # process_shm --id 0 00:16:04.560 16:08:34 -- common/autotest_common.sh@794 -- # type=--id 00:16:04.560 16:08:34 -- common/autotest_common.sh@795 -- # id=0 00:16:04.560 16:08:34 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:16:04.560 16:08:34 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:04.560 16:08:34 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:16:04.560 16:08:34 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:16:04.560 16:08:34 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:16:04.560 16:08:34 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:04.560 nvmf_trace.0 00:16:04.560 16:08:34 -- common/autotest_common.sh@809 -- # return 0 00:16:04.560 16:08:34 -- target/tls.sh@16 -- # killprocess 83243 00:16:04.560 16:08:34 -- common/autotest_common.sh@936 -- # '[' -z 83243 ']' 00:16:04.560 16:08:34 -- common/autotest_common.sh@940 -- # kill -0 83243 00:16:04.560 16:08:34 -- common/autotest_common.sh@941 -- # uname 00:16:04.560 16:08:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:04.560 16:08:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83243 00:16:04.560 16:08:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:04.560 16:08:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:04.560 16:08:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 
83243' 00:16:04.560 killing process with pid 83243 00:16:04.560 16:08:34 -- common/autotest_common.sh@955 -- # kill 83243 00:16:04.560 Received shutdown signal, test time was about 1.000000 seconds 00:16:04.560 00:16:04.560 Latency(us) 00:16:04.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.560 =================================================================================================================== 00:16:04.560 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:04.560 16:08:34 -- common/autotest_common.sh@960 -- # wait 83243 00:16:04.560 16:08:34 -- target/tls.sh@17 -- # nvmftestfini 00:16:04.560 16:08:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:04.560 16:08:34 -- nvmf/common.sh@117 -- # sync 00:16:04.818 16:08:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:04.818 16:08:34 -- nvmf/common.sh@120 -- # set +e 00:16:04.818 16:08:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:04.818 16:08:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:04.818 rmmod nvme_tcp 00:16:04.818 rmmod nvme_fabrics 00:16:04.818 16:08:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:04.818 16:08:34 -- nvmf/common.sh@124 -- # set -e 00:16:04.818 16:08:34 -- nvmf/common.sh@125 -- # return 0 00:16:04.818 16:08:34 -- nvmf/common.sh@478 -- # '[' -n 83211 ']' 00:16:04.818 16:08:34 -- nvmf/common.sh@479 -- # killprocess 83211 00:16:04.818 16:08:34 -- common/autotest_common.sh@936 -- # '[' -z 83211 ']' 00:16:04.818 16:08:34 -- common/autotest_common.sh@940 -- # kill -0 83211 00:16:04.818 16:08:34 -- common/autotest_common.sh@941 -- # uname 00:16:04.818 16:08:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:04.818 16:08:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83211 00:16:04.818 16:08:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:04.818 16:08:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:04.818 16:08:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83211' 00:16:04.818 killing process with pid 83211 00:16:04.818 16:08:34 -- common/autotest_common.sh@955 -- # kill 83211 00:16:04.818 16:08:34 -- common/autotest_common.sh@960 -- # wait 83211 00:16:05.076 16:08:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:05.076 16:08:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:05.076 16:08:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:05.076 16:08:34 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.076 16:08:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:05.076 16:08:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.076 16:08:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.076 16:08:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.076 16:08:34 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:05.076 16:08:34 -- target/tls.sh@18 -- # rm -f /tmp/tmp.fCnSqI4vdS /tmp/tmp.bH8QyXnFOQ /tmp/tmp.Y8qtaTnaW9 00:16:05.076 00:16:05.076 real 1m21.261s 00:16:05.076 user 2m11.217s 00:16:05.076 sys 0m25.879s 00:16:05.076 16:08:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:05.076 16:08:34 -- common/autotest_common.sh@10 -- # set +x 00:16:05.076 ************************************ 00:16:05.076 END TEST nvmf_tls 00:16:05.076 ************************************ 00:16:05.076 16:08:34 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 
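The killprocess/nvmftestfini sequence above follows a fixed pattern: confirm the pid is still alive with kill -0, look up its command name so a recycled pid is not killed by mistake, then kill and wait. A compact sketch of that pattern, assuming the caller supplies the pid (the real helper in autotest_common.sh also special-cases targets launched through sudo):

    killprocess() {
        local pid=$1
        # Only act if the process still exists.
        kill -0 "$pid" 2>/dev/null || return 0
        # Look up the command name, mirroring the ps call in the trace above.
        local name
        name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        # Reap it so the exit status is collected before the next test starts.
        wait "$pid" 2>/dev/null || true
    }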
00:16:05.076 16:08:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:05.076 16:08:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:05.076 16:08:34 -- common/autotest_common.sh@10 -- # set +x 00:16:05.076 ************************************ 00:16:05.076 START TEST nvmf_fips 00:16:05.076 ************************************ 00:16:05.076 16:08:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:05.335 * Looking for test storage... 00:16:05.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:05.335 16:08:35 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:05.335 16:08:35 -- nvmf/common.sh@7 -- # uname -s 00:16:05.335 16:08:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.335 16:08:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.335 16:08:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.335 16:08:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.335 16:08:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.335 16:08:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.335 16:08:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.335 16:08:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.335 16:08:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.335 16:08:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.335 16:08:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:16:05.335 16:08:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:16:05.335 16:08:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.335 16:08:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.335 16:08:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:05.335 16:08:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.335 16:08:35 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:05.335 16:08:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.335 16:08:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.335 16:08:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.335 16:08:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.335 16:08:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.335 16:08:35 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.335 16:08:35 -- paths/export.sh@5 -- # export PATH 00:16:05.335 16:08:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.335 16:08:35 -- nvmf/common.sh@47 -- # : 0 00:16:05.335 16:08:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:05.335 16:08:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:05.335 16:08:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.335 16:08:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.335 16:08:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.335 16:08:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:05.335 16:08:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:05.335 16:08:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:05.335 16:08:35 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:05.335 16:08:35 -- fips/fips.sh@89 -- # check_openssl_version 00:16:05.335 16:08:35 -- fips/fips.sh@83 -- # local target=3.0.0 00:16:05.335 16:08:35 -- fips/fips.sh@85 -- # openssl version 00:16:05.335 16:08:35 -- fips/fips.sh@85 -- # awk '{print $2}' 00:16:05.335 16:08:35 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:16:05.335 16:08:35 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:16:05.335 16:08:35 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:16:05.335 16:08:35 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:16:05.335 16:08:35 -- scripts/common.sh@333 -- # IFS=.-: 00:16:05.335 16:08:35 -- scripts/common.sh@333 -- # read -ra ver1 00:16:05.335 16:08:35 -- scripts/common.sh@334 -- # IFS=.-: 00:16:05.335 16:08:35 -- scripts/common.sh@334 -- # read -ra ver2 00:16:05.335 16:08:35 -- scripts/common.sh@335 -- # local 'op=>=' 00:16:05.335 16:08:35 -- scripts/common.sh@337 -- # ver1_l=3 00:16:05.335 16:08:35 -- scripts/common.sh@338 -- # ver2_l=3 00:16:05.335 16:08:35 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:16:05.335 16:08:35 -- scripts/common.sh@341 -- # case "$op" in 00:16:05.335 16:08:35 -- scripts/common.sh@345 -- # : 1 00:16:05.335 16:08:35 -- scripts/common.sh@361 -- # (( v = 0 )) 00:16:05.335 16:08:35 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:05.335 16:08:35 -- scripts/common.sh@362 -- # decimal 3 00:16:05.335 16:08:35 -- scripts/common.sh@350 -- # local d=3 00:16:05.335 16:08:35 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:05.335 16:08:35 -- scripts/common.sh@352 -- # echo 3 00:16:05.335 16:08:35 -- scripts/common.sh@362 -- # ver1[v]=3 00:16:05.335 16:08:35 -- scripts/common.sh@363 -- # decimal 3 00:16:05.335 16:08:35 -- scripts/common.sh@350 -- # local d=3 00:16:05.335 16:08:35 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:05.335 16:08:35 -- scripts/common.sh@352 -- # echo 3 00:16:05.335 16:08:35 -- scripts/common.sh@363 -- # ver2[v]=3 00:16:05.335 16:08:35 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:05.335 16:08:35 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:05.335 16:08:35 -- scripts/common.sh@361 -- # (( v++ )) 00:16:05.335 16:08:35 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:05.335 16:08:35 -- scripts/common.sh@362 -- # decimal 0 00:16:05.335 16:08:35 -- scripts/common.sh@350 -- # local d=0 00:16:05.335 16:08:35 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:05.335 16:08:35 -- scripts/common.sh@352 -- # echo 0 00:16:05.335 16:08:35 -- scripts/common.sh@362 -- # ver1[v]=0 00:16:05.335 16:08:35 -- scripts/common.sh@363 -- # decimal 0 00:16:05.335 16:08:35 -- scripts/common.sh@350 -- # local d=0 00:16:05.335 16:08:35 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:05.335 16:08:35 -- scripts/common.sh@352 -- # echo 0 00:16:05.335 16:08:35 -- scripts/common.sh@363 -- # ver2[v]=0 00:16:05.335 16:08:35 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:05.335 16:08:35 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:05.335 16:08:35 -- scripts/common.sh@361 -- # (( v++ )) 00:16:05.335 16:08:35 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:05.335 16:08:35 -- scripts/common.sh@362 -- # decimal 9 00:16:05.335 16:08:35 -- scripts/common.sh@350 -- # local d=9 00:16:05.335 16:08:35 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:16:05.335 16:08:35 -- scripts/common.sh@352 -- # echo 9 00:16:05.335 16:08:35 -- scripts/common.sh@362 -- # ver1[v]=9 00:16:05.335 16:08:35 -- scripts/common.sh@363 -- # decimal 0 00:16:05.335 16:08:35 -- scripts/common.sh@350 -- # local d=0 00:16:05.335 16:08:35 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:05.335 16:08:35 -- scripts/common.sh@352 -- # echo 0 00:16:05.335 16:08:35 -- scripts/common.sh@363 -- # ver2[v]=0 00:16:05.335 16:08:35 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:05.335 16:08:35 -- scripts/common.sh@364 -- # return 0 00:16:05.335 16:08:35 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:16:05.335 16:08:35 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:16:05.335 16:08:35 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:16:05.335 16:08:35 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:05.335 16:08:35 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:05.335 16:08:35 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:16:05.335 16:08:35 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:16:05.335 16:08:35 -- fips/fips.sh@113 -- # build_openssl_config 00:16:05.335 16:08:35 -- fips/fips.sh@37 -- # cat 00:16:05.335 16:08:35 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:16:05.335 16:08:35 -- fips/fips.sh@58 -- # cat - 00:16:05.335 16:08:35 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:05.335 16:08:35 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:16:05.335 16:08:35 -- fips/fips.sh@116 -- # mapfile -t providers 00:16:05.335 16:08:35 -- fips/fips.sh@116 -- # openssl list -providers 00:16:05.335 16:08:35 -- fips/fips.sh@116 -- # grep name 00:16:05.595 16:08:35 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:16:05.595 16:08:35 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:16:05.595 16:08:35 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:05.595 16:08:35 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:16:05.595 16:08:35 -- fips/fips.sh@127 -- # : 00:16:05.595 16:08:35 -- common/autotest_common.sh@638 -- # local es=0 00:16:05.595 16:08:35 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:05.595 16:08:35 -- common/autotest_common.sh@626 -- # local arg=openssl 00:16:05.595 16:08:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:05.595 16:08:35 -- common/autotest_common.sh@630 -- # type -t openssl 00:16:05.595 16:08:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:05.595 16:08:35 -- common/autotest_common.sh@632 -- # type -P openssl 00:16:05.595 16:08:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:05.595 16:08:35 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:16:05.595 16:08:35 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:16:05.595 16:08:35 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:16:05.595 Error setting digest 00:16:05.595 00C2C5B6297F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:16:05.595 00C2C5B6297F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:16:05.595 16:08:35 -- common/autotest_common.sh@641 -- # es=1 00:16:05.595 16:08:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:05.595 16:08:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:05.595 16:08:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:05.595 16:08:35 -- fips/fips.sh@130 -- # nvmftestinit 00:16:05.595 16:08:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:05.595 16:08:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.595 16:08:35 -- nvmf/common.sh@437 -- # prepare_net_devs 
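The fips.sh preamble above gates the test on three conditions before any NVMe-oF traffic is generated: OpenSSL is at least 3.0.0, both the base and FIPS providers are active under the generated spdk_fips.conf, and a non-approved digest such as MD5 is actually rejected. A minimal standalone version of that gate, assuming the same OPENSSL_CONF has already been exported:

    # List active providers; the test expects a base entry and a fips entry.
    openssl list -providers | grep name
    # Under FIPS, MD5 must fail; the "unsupported" digital envelope error above is the expected outcome.
    if openssl md5 /dev/null >/dev/null 2>&1; then
        echo "MD5 still works, FIPS provider is not enforcing" >&2
        exit 1
    fi
    echo "FIPS enforcement confirmed: MD5 rejected"

Only after this check does nvmftestinit start tearing down and rebuilding the veth/namespace topology seen in the following ip commands.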
00:16:05.595 16:08:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:05.595 16:08:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:05.595 16:08:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.595 16:08:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.595 16:08:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.595 16:08:35 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:05.595 16:08:35 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:05.595 16:08:35 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:05.595 16:08:35 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:05.595 16:08:35 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:05.595 16:08:35 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:05.595 16:08:35 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.595 16:08:35 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:05.595 16:08:35 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:05.595 16:08:35 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:05.595 16:08:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:05.595 16:08:35 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:05.595 16:08:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:05.595 16:08:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.595 16:08:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:05.595 16:08:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:05.595 16:08:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:05.595 16:08:35 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:05.595 16:08:35 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:05.595 16:08:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:05.595 Cannot find device "nvmf_tgt_br" 00:16:05.595 16:08:35 -- nvmf/common.sh@155 -- # true 00:16:05.595 16:08:35 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:05.595 Cannot find device "nvmf_tgt_br2" 00:16:05.595 16:08:35 -- nvmf/common.sh@156 -- # true 00:16:05.595 16:08:35 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:05.595 16:08:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:05.595 Cannot find device "nvmf_tgt_br" 00:16:05.595 16:08:35 -- nvmf/common.sh@158 -- # true 00:16:05.595 16:08:35 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:05.595 Cannot find device "nvmf_tgt_br2" 00:16:05.595 16:08:35 -- nvmf/common.sh@159 -- # true 00:16:05.595 16:08:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:05.595 16:08:35 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:05.595 16:08:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.595 16:08:35 -- nvmf/common.sh@162 -- # true 00:16:05.595 16:08:35 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.595 16:08:35 -- nvmf/common.sh@163 -- # true 00:16:05.595 16:08:35 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:05.595 16:08:35 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:05.595 16:08:35 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:05.595 16:08:35 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:05.595 16:08:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:05.595 16:08:35 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:05.854 16:08:35 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:05.854 16:08:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:05.854 16:08:35 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:05.854 16:08:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:05.854 16:08:35 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:05.854 16:08:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:05.854 16:08:35 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:05.854 16:08:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:05.854 16:08:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:05.854 16:08:35 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:05.854 16:08:35 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:05.854 16:08:35 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:05.854 16:08:35 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:05.854 16:08:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:05.854 16:08:35 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:05.854 16:08:35 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:05.854 16:08:35 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:05.854 16:08:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:05.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:05.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:16:05.854 00:16:05.854 --- 10.0.0.2 ping statistics --- 00:16:05.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.854 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:16:05.854 16:08:35 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:05.854 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:05.854 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:16:05.854 00:16:05.854 --- 10.0.0.3 ping statistics --- 00:16:05.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.854 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:16:05.854 16:08:35 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:05.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:05.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:05.854 00:16:05.854 --- 10.0.0.1 ping statistics --- 00:16:05.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.854 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:05.854 16:08:35 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.854 16:08:35 -- nvmf/common.sh@422 -- # return 0 00:16:05.854 16:08:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:05.854 16:08:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.854 16:08:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:05.854 16:08:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:05.854 16:08:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.854 16:08:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:05.854 16:08:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:05.854 16:08:35 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:16:05.854 16:08:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:05.854 16:08:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:05.854 16:08:35 -- common/autotest_common.sh@10 -- # set +x 00:16:05.854 16:08:35 -- nvmf/common.sh@470 -- # nvmfpid=83517 00:16:05.854 16:08:35 -- nvmf/common.sh@471 -- # waitforlisten 83517 00:16:05.854 16:08:35 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:05.854 16:08:35 -- common/autotest_common.sh@817 -- # '[' -z 83517 ']' 00:16:05.854 16:08:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.854 16:08:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:05.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.854 16:08:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.854 16:08:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:05.854 16:08:35 -- common/autotest_common.sh@10 -- # set +x 00:16:06.112 [2024-04-15 16:08:35.833449] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:16:06.112 [2024-04-15 16:08:35.833769] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.112 [2024-04-15 16:08:35.983609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.112 [2024-04-15 16:08:36.035154] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.112 [2024-04-15 16:08:36.035439] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.112 [2024-04-15 16:08:36.035658] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.112 [2024-04-15 16:08:36.035911] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.112 [2024-04-15 16:08:36.035955] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
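The nvmf_veth_init sequence above builds a small topology: an initiator-side veth (nvmf_init_if, 10.0.0.1) on the host, target-side veths (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, their host-side peers joined by the nvmf_br bridge, and iptables rules admitting NVMe/TCP traffic on port 4420. A condensed sketch using the same names and addresses as the trace (the second target interface is omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # host reaches the target namespace across the bridge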
00:16:06.113 [2024-04-15 16:08:36.036091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.045 16:08:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:07.045 16:08:36 -- common/autotest_common.sh@850 -- # return 0 00:16:07.045 16:08:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:07.045 16:08:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:07.045 16:08:36 -- common/autotest_common.sh@10 -- # set +x 00:16:07.045 16:08:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.045 16:08:36 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:16:07.045 16:08:36 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:07.045 16:08:36 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:07.045 16:08:36 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:07.045 16:08:36 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:07.045 16:08:36 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:07.045 16:08:36 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:07.045 16:08:36 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:07.045 [2024-04-15 16:08:36.987059] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.045 [2024-04-15 16:08:37.003007] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:07.045 [2024-04-15 16:08:37.003353] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.333 [2024-04-15 16:08:37.032293] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:07.333 malloc0 00:16:07.333 16:08:37 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:07.333 16:08:37 -- fips/fips.sh@147 -- # bdevperf_pid=83551 00:16:07.333 16:08:37 -- fips/fips.sh@148 -- # waitforlisten 83551 /var/tmp/bdevperf.sock 00:16:07.333 16:08:37 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:07.333 16:08:37 -- common/autotest_common.sh@817 -- # '[' -z 83551 ']' 00:16:07.333 16:08:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:07.333 16:08:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:07.333 16:08:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:07.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:07.333 16:08:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:07.333 16:08:37 -- common/autotest_common.sh@10 -- # set +x 00:16:07.333 [2024-04-15 16:08:37.119760] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
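At this point the target side has the PSK on disk (mode 0600) and a TLS-enabled listener on 10.0.0.2:4420, and bdevperf has been started with -z -r /var/tmp/bdevperf.sock. The initiator side of the test, condensed from the trace that follows, with the long repo path shortened to a $SPDK shorthand introduced here purely for readability:

  SPDK=/home/vagrant/spdk_repo/spdk    # shorthand for the repo path used throughout this job
  # Attach a bdev over NVMe/TCP using the same pre-shared key as the target listener
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk $SPDK/test/nvmf/fips/key.txt
  # Drive the configured verify workload for the requested duration
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests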
00:16:07.333 [2024-04-15 16:08:37.120019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83551 ] 00:16:07.333 [2024-04-15 16:08:37.262601] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.590 [2024-04-15 16:08:37.316151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.156 16:08:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:08.156 16:08:38 -- common/autotest_common.sh@850 -- # return 0 00:16:08.157 16:08:38 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:08.415 [2024-04-15 16:08:38.321113] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:08.415 [2024-04-15 16:08:38.321464] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:08.673 TLSTESTn1 00:16:08.673 16:08:38 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:08.673 Running I/O for 10 seconds... 00:16:18.641 00:16:18.641 Latency(us) 00:16:18.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.641 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:18.641 Verification LBA range: start 0x0 length 0x2000 00:16:18.641 TLSTESTn1 : 10.01 5719.96 22.34 0.00 0.00 22342.29 4025.78 17725.93 00:16:18.641 =================================================================================================================== 00:16:18.641 Total : 5719.96 22.34 0.00 0.00 22342.29 4025.78 17725.93 00:16:18.641 0 00:16:18.641 16:08:48 -- fips/fips.sh@1 -- # cleanup 00:16:18.641 16:08:48 -- fips/fips.sh@15 -- # process_shm --id 0 00:16:18.641 16:08:48 -- common/autotest_common.sh@794 -- # type=--id 00:16:18.642 16:08:48 -- common/autotest_common.sh@795 -- # id=0 00:16:18.642 16:08:48 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:16:18.642 16:08:48 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:18.642 16:08:48 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:16:18.642 16:08:48 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:16:18.642 16:08:48 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:16:18.642 16:08:48 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:18.642 nvmf_trace.0 00:16:18.642 16:08:48 -- common/autotest_common.sh@809 -- # return 0 00:16:18.642 16:08:48 -- fips/fips.sh@16 -- # killprocess 83551 00:16:18.642 16:08:48 -- common/autotest_common.sh@936 -- # '[' -z 83551 ']' 00:16:18.642 16:08:48 -- common/autotest_common.sh@940 -- # kill -0 83551 00:16:18.642 16:08:48 -- common/autotest_common.sh@941 -- # uname 00:16:18.898 16:08:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:18.899 16:08:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83551 00:16:18.899 16:08:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:18.899 
16:08:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:18.899 16:08:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83551' 00:16:18.899 killing process with pid 83551 00:16:18.899 16:08:48 -- common/autotest_common.sh@955 -- # kill 83551 00:16:18.899 Received shutdown signal, test time was about 10.000000 seconds 00:16:18.899 00:16:18.899 Latency(us) 00:16:18.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.899 =================================================================================================================== 00:16:18.899 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:18.899 [2024-04-15 16:08:48.632933] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:18.899 16:08:48 -- common/autotest_common.sh@960 -- # wait 83551 00:16:18.899 16:08:48 -- fips/fips.sh@17 -- # nvmftestfini 00:16:18.899 16:08:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:18.899 16:08:48 -- nvmf/common.sh@117 -- # sync 00:16:19.229 16:08:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:19.229 16:08:48 -- nvmf/common.sh@120 -- # set +e 00:16:19.229 16:08:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:19.229 16:08:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:19.229 rmmod nvme_tcp 00:16:19.229 rmmod nvme_fabrics 00:16:19.229 16:08:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:19.229 16:08:48 -- nvmf/common.sh@124 -- # set -e 00:16:19.229 16:08:48 -- nvmf/common.sh@125 -- # return 0 00:16:19.230 16:08:48 -- nvmf/common.sh@478 -- # '[' -n 83517 ']' 00:16:19.230 16:08:48 -- nvmf/common.sh@479 -- # killprocess 83517 00:16:19.230 16:08:48 -- common/autotest_common.sh@936 -- # '[' -z 83517 ']' 00:16:19.230 16:08:48 -- common/autotest_common.sh@940 -- # kill -0 83517 00:16:19.230 16:08:48 -- common/autotest_common.sh@941 -- # uname 00:16:19.230 16:08:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:19.230 16:08:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83517 00:16:19.230 killing process with pid 83517 00:16:19.230 16:08:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:19.230 16:08:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:19.230 16:08:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83517' 00:16:19.230 16:08:48 -- common/autotest_common.sh@955 -- # kill 83517 00:16:19.230 [2024-04-15 16:08:48.939682] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:19.230 16:08:48 -- common/autotest_common.sh@960 -- # wait 83517 00:16:19.230 16:08:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:19.230 16:08:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:19.230 16:08:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:19.230 16:08:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.230 16:08:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:19.230 16:08:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.230 16:08:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.230 16:08:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.505 16:08:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:19.505 16:08:49 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:19.505 ************************************ 00:16:19.505 END TEST nvmf_fips 00:16:19.505 ************************************ 00:16:19.505 00:16:19.505 real 0m14.144s 00:16:19.505 user 0m20.396s 00:16:19.505 sys 0m5.210s 00:16:19.505 16:08:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:19.505 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:16:19.505 16:08:49 -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:16:19.505 16:08:49 -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:16:19.505 16:08:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:19.505 16:08:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.505 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:16:19.505 ************************************ 00:16:19.505 START TEST nvmf_fuzz 00:16:19.505 ************************************ 00:16:19.505 16:08:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:16:19.505 * Looking for test storage... 00:16:19.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:19.505 16:08:49 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:19.505 16:08:49 -- nvmf/common.sh@7 -- # uname -s 00:16:19.505 16:08:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.505 16:08:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.505 16:08:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.505 16:08:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.505 16:08:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.505 16:08:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.505 16:08:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.505 16:08:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.505 16:08:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.505 16:08:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.505 16:08:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:16:19.505 16:08:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:16:19.505 16:08:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.505 16:08:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.505 16:08:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:19.505 16:08:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.764 16:08:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.764 16:08:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.764 16:08:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.764 16:08:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.764 16:08:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.764 16:08:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.764 16:08:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.764 16:08:49 -- paths/export.sh@5 -- # export PATH 00:16:19.764 16:08:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.764 16:08:49 -- nvmf/common.sh@47 -- # : 0 00:16:19.764 16:08:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.764 16:08:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.764 16:08:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.764 16:08:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.764 16:08:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.764 16:08:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.764 16:08:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.764 16:08:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.764 16:08:49 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:16:19.764 16:08:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:19.764 16:08:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.764 16:08:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:19.764 16:08:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:19.764 16:08:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:19.764 16:08:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.764 16:08:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.764 16:08:49 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.764 16:08:49 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:19.764 16:08:49 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:19.764 16:08:49 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:19.764 16:08:49 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:19.764 16:08:49 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:19.764 16:08:49 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:19.764 16:08:49 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.764 16:08:49 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.764 16:08:49 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:19.764 16:08:49 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:19.764 16:08:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:19.764 16:08:49 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:19.764 16:08:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:19.764 16:08:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.764 16:08:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:19.764 16:08:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:19.764 16:08:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:19.764 16:08:49 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:19.764 16:08:49 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:19.764 16:08:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:19.764 Cannot find device "nvmf_tgt_br" 00:16:19.764 16:08:49 -- nvmf/common.sh@155 -- # true 00:16:19.764 16:08:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:19.764 Cannot find device "nvmf_tgt_br2" 00:16:19.764 16:08:49 -- nvmf/common.sh@156 -- # true 00:16:19.764 16:08:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:19.764 16:08:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:19.764 Cannot find device "nvmf_tgt_br" 00:16:19.764 16:08:49 -- nvmf/common.sh@158 -- # true 00:16:19.764 16:08:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:19.764 Cannot find device "nvmf_tgt_br2" 00:16:19.764 16:08:49 -- nvmf/common.sh@159 -- # true 00:16:19.764 16:08:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:19.764 16:08:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:19.764 16:08:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:19.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.764 16:08:49 -- nvmf/common.sh@162 -- # true 00:16:19.764 16:08:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:19.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.764 16:08:49 -- nvmf/common.sh@163 -- # true 00:16:19.764 16:08:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:19.764 16:08:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:19.764 16:08:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:19.764 16:08:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:19.764 16:08:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:20.023 16:08:49 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:20.023 16:08:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:20.023 16:08:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:20.023 16:08:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:20.023 16:08:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:20.023 16:08:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:20.023 16:08:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:20.023 16:08:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:20.023 16:08:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:20.023 16:08:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:20.023 16:08:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:20.023 16:08:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:20.023 16:08:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:20.023 16:08:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:20.023 16:08:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:20.023 16:08:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:20.023 16:08:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:20.023 16:08:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:20.023 16:08:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:20.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:16:20.023 00:16:20.023 --- 10.0.0.2 ping statistics --- 00:16:20.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.023 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:16:20.023 16:08:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:20.023 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:20.023 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:16:20.023 00:16:20.023 --- 10.0.0.3 ping statistics --- 00:16:20.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.023 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:20.023 16:08:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:20.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:20.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:20.023 00:16:20.023 --- 10.0.0.1 ping statistics --- 00:16:20.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.023 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:20.023 16:08:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.023 16:08:49 -- nvmf/common.sh@422 -- # return 0 00:16:20.023 16:08:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:20.023 16:08:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.023 16:08:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:20.023 16:08:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:20.023 16:08:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.023 16:08:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:20.023 16:08:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:20.023 16:08:49 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:20.023 16:08:49 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=83882 00:16:20.023 16:08:49 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:20.023 16:08:49 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 83882 00:16:20.023 16:08:49 -- common/autotest_common.sh@817 -- # '[' -z 83882 ']' 00:16:20.023 16:08:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.023 16:08:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:20.023 16:08:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
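waitforlisten here simply blocks until the target's JSON-RPC socket answers before the transport, bdev, and subsystem RPCs are issued. A rough stand-in, assuming a plain poll loop with rpc_get_methods as the liveness probe (the real helper in autotest_common.sh also verifies the pid is still alive and honors its max_retries cap):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for ((i = 0; i < 100; i++)); do
      if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break    # target is up and serving RPCs
      fi
      sleep 0.5
  done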
00:16:20.023 16:08:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:20.023 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:16:21.398 16:08:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:21.398 16:08:50 -- common/autotest_common.sh@850 -- # return 0 00:16:21.398 16:08:50 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.398 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.398 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:16:21.398 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.398 16:08:50 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:16:21.398 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.398 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:16:21.398 Malloc0 00:16:21.398 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.398 16:08:51 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:21.398 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.398 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.398 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.398 16:08:51 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:21.398 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.398 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.398 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.398 16:08:51 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.398 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.398 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.398 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.398 16:08:51 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:16:21.398 16:08:51 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:16:21.398 Shutting down the fuzz application 00:16:21.398 16:08:51 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:16:21.965 Shutting down the fuzz application 00:16:21.965 16:08:51 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.965 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.965 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.965 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.965 16:08:51 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:21.965 16:08:51 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:16:21.965 16:08:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:21.965 16:08:51 -- nvmf/common.sh@117 -- # sync 00:16:21.965 16:08:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.965 16:08:51 -- nvmf/common.sh@120 -- # set +e 00:16:21.965 16:08:51 -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.965 16:08:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.965 rmmod nvme_tcp 00:16:21.965 rmmod nvme_fabrics 00:16:21.965 16:08:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.965 16:08:51 -- nvmf/common.sh@124 -- # set -e 00:16:21.965 16:08:51 -- nvmf/common.sh@125 -- # return 0 00:16:21.965 16:08:51 -- nvmf/common.sh@478 -- # '[' -n 83882 ']' 00:16:21.965 16:08:51 -- nvmf/common.sh@479 -- # killprocess 83882 00:16:21.965 16:08:51 -- common/autotest_common.sh@936 -- # '[' -z 83882 ']' 00:16:21.965 16:08:51 -- common/autotest_common.sh@940 -- # kill -0 83882 00:16:21.965 16:08:51 -- common/autotest_common.sh@941 -- # uname 00:16:21.965 16:08:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:21.965 16:08:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83882 00:16:22.223 16:08:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:22.223 16:08:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:22.223 16:08:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83882' 00:16:22.223 killing process with pid 83882 00:16:22.223 16:08:51 -- common/autotest_common.sh@955 -- # kill 83882 00:16:22.223 16:08:51 -- common/autotest_common.sh@960 -- # wait 83882 00:16:22.223 16:08:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:22.223 16:08:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:22.223 16:08:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:22.223 16:08:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.223 16:08:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:22.223 16:08:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.223 16:08:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.223 16:08:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.223 16:08:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:22.223 16:08:52 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:16:22.480 00:16:22.480 real 0m2.891s 00:16:22.480 user 0m2.809s 00:16:22.480 sys 0m0.711s 00:16:22.480 16:08:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:22.480 ************************************ 00:16:22.480 END TEST nvmf_fuzz 00:16:22.480 ************************************ 00:16:22.480 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:16:22.480 16:08:52 -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:16:22.480 16:08:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:22.480 16:08:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:22.480 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:16:22.480 ************************************ 00:16:22.480 START TEST nvmf_multiconnection 00:16:22.480 ************************************ 00:16:22.480 16:08:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:16:22.480 * Looking for test storage... 
00:16:22.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:22.480 16:08:52 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.480 16:08:52 -- nvmf/common.sh@7 -- # uname -s 00:16:22.480 16:08:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.480 16:08:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.480 16:08:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.480 16:08:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.480 16:08:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.480 16:08:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.480 16:08:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.480 16:08:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.480 16:08:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.480 16:08:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.480 16:08:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:16:22.480 16:08:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:16:22.480 16:08:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.480 16:08:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.480 16:08:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.480 16:08:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.480 16:08:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.480 16:08:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.480 16:08:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.480 16:08:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.480 16:08:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.480 16:08:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.480 16:08:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.480 16:08:52 -- paths/export.sh@5 -- # export PATH 00:16:22.480 16:08:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.480 16:08:52 -- nvmf/common.sh@47 -- # : 0 00:16:22.480 16:08:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.480 16:08:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.480 16:08:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.480 16:08:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.480 16:08:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.480 16:08:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.480 16:08:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.480 16:08:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.480 16:08:52 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:22.480 16:08:52 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:22.480 16:08:52 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:16:22.480 16:08:52 -- target/multiconnection.sh@16 -- # nvmftestinit 00:16:22.480 16:08:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:22.480 16:08:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.480 16:08:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:22.480 16:08:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:22.480 16:08:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:22.738 16:08:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.738 16:08:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.738 16:08:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.738 16:08:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:22.738 16:08:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:22.738 16:08:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:22.738 16:08:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:22.738 16:08:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:22.738 16:08:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:22.738 16:08:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.738 16:08:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.738 16:08:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:22.738 16:08:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:22.738 16:08:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:22.738 16:08:52 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:22.738 16:08:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:22.738 16:08:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.738 16:08:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:22.738 16:08:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:22.738 16:08:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:22.738 16:08:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:22.738 16:08:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:22.738 16:08:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:22.738 Cannot find device "nvmf_tgt_br" 00:16:22.738 16:08:52 -- nvmf/common.sh@155 -- # true 00:16:22.738 16:08:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.738 Cannot find device "nvmf_tgt_br2" 00:16:22.738 16:08:52 -- nvmf/common.sh@156 -- # true 00:16:22.738 16:08:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:22.738 16:08:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:22.738 Cannot find device "nvmf_tgt_br" 00:16:22.738 16:08:52 -- nvmf/common.sh@158 -- # true 00:16:22.738 16:08:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:22.738 Cannot find device "nvmf_tgt_br2" 00:16:22.738 16:08:52 -- nvmf/common.sh@159 -- # true 00:16:22.738 16:08:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:22.738 16:08:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:22.738 16:08:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.738 16:08:52 -- nvmf/common.sh@162 -- # true 00:16:22.738 16:08:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.738 16:08:52 -- nvmf/common.sh@163 -- # true 00:16:22.738 16:08:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:22.738 16:08:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:22.738 16:08:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:22.738 16:08:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:22.738 16:08:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:22.738 16:08:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:22.738 16:08:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:22.738 16:08:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:22.738 16:08:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:22.738 16:08:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:22.738 16:08:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:22.738 16:08:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:22.738 16:08:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:22.738 16:08:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:22.738 16:08:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:16:22.995 16:08:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:22.995 16:08:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:22.995 16:08:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:22.995 16:08:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:22.995 16:08:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:22.995 16:08:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:22.995 16:08:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:22.996 16:08:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:22.996 16:08:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:22.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:16:22.996 00:16:22.996 --- 10.0.0.2 ping statistics --- 00:16:22.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.996 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:16:22.996 16:08:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:22.996 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:22.996 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:16:22.996 00:16:22.996 --- 10.0.0.3 ping statistics --- 00:16:22.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.996 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:22.996 16:08:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:22.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:22.996 00:16:22.996 --- 10.0.0.1 ping statistics --- 00:16:22.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.996 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:22.996 16:08:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.996 16:08:52 -- nvmf/common.sh@422 -- # return 0 00:16:22.996 16:08:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:22.996 16:08:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.996 16:08:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:22.996 16:08:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:22.996 16:08:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.996 16:08:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:22.996 16:08:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:22.996 16:08:52 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:16:22.996 16:08:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:22.996 16:08:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:22.996 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:16:22.996 16:08:52 -- nvmf/common.sh@470 -- # nvmfpid=84085 00:16:22.996 16:08:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:22.996 16:08:52 -- nvmf/common.sh@471 -- # waitforlisten 84085 00:16:22.996 16:08:52 -- common/autotest_common.sh@817 -- # '[' -z 84085 ']' 00:16:22.996 16:08:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.996 16:08:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:22.996 16:08:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.996 16:08:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:22.996 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:16:22.996 [2024-04-15 16:08:52.933727] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:16:22.996 [2024-04-15 16:08:52.934048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.254 [2024-04-15 16:08:53.088444] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.254 [2024-04-15 16:08:53.145928] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.254 [2024-04-15 16:08:53.146186] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.254 [2024-04-15 16:08:53.146328] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.254 [2024-04-15 16:08:53.146401] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.254 [2024-04-15 16:08:53.146495] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.254 [2024-04-15 16:08:53.146779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.254 [2024-04-15 16:08:53.146858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.254 [2024-04-15 16:08:53.147763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.254 [2024-04-15 16:08:53.147766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.186 16:08:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:24.186 16:08:53 -- common/autotest_common.sh@850 -- # return 0 00:16:24.186 16:08:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:24.186 16:08:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:24.186 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:16:24.186 16:08:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.186 16:08:53 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:24.186 16:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.186 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:16:24.186 [2024-04-15 16:08:53.931422] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.186 16:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.186 16:08:53 -- target/multiconnection.sh@21 -- # seq 1 11 00:16:24.186 16:08:53 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.187 16:08:53 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:24.187 16:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 Malloc1 00:16:24.187 16:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:53 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:16:24.187 16:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:53 -- 
common/autotest_common.sh@10 -- # set +x 00:16:24.187 16:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:53 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.187 16:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.187 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 [2024-04-15 16:08:54.007556] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.187 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.187 16:08:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:16:24.187 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 Malloc2 00:16:24.187 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:24.187 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:16:24.187 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:24.187 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.187 16:08:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:16:24.187 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 Malloc3 00:16:24.187 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:16:24.187 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:16:24.187 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:54 
-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:24.187 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.187 16:08:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:16:24.187 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 Malloc4 00:16:24.187 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:16:24.187 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:16:24.187 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:16:24.187 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.187 16:08:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.187 16:08:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:16:24.187 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.187 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.445 Malloc5 00:16:24.445 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.445 16:08:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:16:24.445 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.445 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.445 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.445 16:08:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:16:24.445 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.445 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.445 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.445 16:08:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:16:24.445 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.445 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.445 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.446 16:08:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 
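The per-subsystem setup that multiconnection.sh traces above and below for Malloc1 through Malloc11 reduces to the following sketch. It is illustrative only, assuming the standalone scripts/rpc.py client in place of the test framework's rpc_cmd wrapper and a subsystem count of 11 taken from $NVMF_SUBSYS in this run; the commands and flag values are the ones visible in the trace.

for i in $(seq 1 11); do
    # 64 MB malloc bdev with a 512-byte block size, named MallocN
    scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
    # subsystem cnodeN: -a allows any host, -s sets the serial number to SPDKN
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    # expose the bdev as a namespace of that subsystem
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # listen for NVMe/TCP connections on 10.0.0.2:4420
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done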
00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 Malloc6 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.446 16:08:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 Malloc7 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.446 16:08:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 Malloc8 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode8 Malloc8 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.446 16:08:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 Malloc9 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.446 16:08:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 Malloc10 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:16:24.446 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.446 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.446 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:16:24.446 16:08:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.704 16:08:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:16:24.704 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.704 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.704 Malloc11 00:16:24.704 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.704 16:08:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:16:24.704 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.704 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.704 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.704 16:08:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:16:24.704 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.704 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.704 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.704 16:08:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:16:24.704 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.704 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.704 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.704 16:08:54 -- target/multiconnection.sh@28 -- # seq 1 11 00:16:24.704 16:08:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.704 16:08:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:24.704 16:08:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:16:24.704 16:08:54 -- common/autotest_common.sh@1184 -- # local i=0 00:16:24.704 16:08:54 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.704 16:08:54 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:24.704 16:08:54 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:26.638 16:08:56 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:26.638 16:08:56 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:26.896 16:08:56 -- common/autotest_common.sh@1193 -- # grep -c SPDK1 00:16:26.896 16:08:56 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:26.896 16:08:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:26.896 16:08:56 -- common/autotest_common.sh@1194 -- # return 0 00:16:26.896 16:08:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:26.896 16:08:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:16:26.896 16:08:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:16:26.896 16:08:56 -- common/autotest_common.sh@1184 -- # local i=0 00:16:26.896 16:08:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.896 16:08:56 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:26.896 16:08:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:28.796 16:08:58 -- 
common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:29.054 16:08:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:29.054 16:08:58 -- common/autotest_common.sh@1193 -- # grep -c SPDK2 00:16:29.054 16:08:58 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:29.054 16:08:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:29.054 16:08:58 -- common/autotest_common.sh@1194 -- # return 0 00:16:29.054 16:08:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:29.054 16:08:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:16:29.054 16:08:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:16:29.054 16:08:58 -- common/autotest_common.sh@1184 -- # local i=0 00:16:29.054 16:08:58 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:29.054 16:08:58 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:29.054 16:08:58 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:31.583 16:09:00 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:31.583 16:09:00 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:31.583 16:09:00 -- common/autotest_common.sh@1193 -- # grep -c SPDK3 00:16:31.583 16:09:00 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:31.583 16:09:00 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:31.583 16:09:00 -- common/autotest_common.sh@1194 -- # return 0 00:16:31.583 16:09:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:31.583 16:09:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:16:31.583 16:09:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:16:31.583 16:09:01 -- common/autotest_common.sh@1184 -- # local i=0 00:16:31.583 16:09:01 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:31.583 16:09:01 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:31.583 16:09:01 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:33.482 16:09:03 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:33.482 16:09:03 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:33.482 16:09:03 -- common/autotest_common.sh@1193 -- # grep -c SPDK4 00:16:33.482 16:09:03 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:33.482 16:09:03 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.482 16:09:03 -- common/autotest_common.sh@1194 -- # return 0 00:16:33.483 16:09:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:33.483 16:09:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:16:33.483 16:09:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:16:33.483 16:09:03 -- common/autotest_common.sh@1184 -- # local i=0 00:16:33.483 16:09:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:33.483 16:09:03 -- 
common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:33.483 16:09:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:35.385 16:09:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:35.385 16:09:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:35.385 16:09:05 -- common/autotest_common.sh@1193 -- # grep -c SPDK5 00:16:35.385 16:09:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:35.385 16:09:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:35.385 16:09:05 -- common/autotest_common.sh@1194 -- # return 0 00:16:35.385 16:09:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:35.385 16:09:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:16:35.643 16:09:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:16:35.643 16:09:05 -- common/autotest_common.sh@1184 -- # local i=0 00:16:35.643 16:09:05 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:35.643 16:09:05 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:35.643 16:09:05 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:37.545 16:09:07 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:37.545 16:09:07 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:37.545 16:09:07 -- common/autotest_common.sh@1193 -- # grep -c SPDK6 00:16:37.545 16:09:07 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:37.545 16:09:07 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:37.545 16:09:07 -- common/autotest_common.sh@1194 -- # return 0 00:16:37.545 16:09:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:37.545 16:09:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:16:37.803 16:09:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:16:37.803 16:09:07 -- common/autotest_common.sh@1184 -- # local i=0 00:16:37.803 16:09:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:37.803 16:09:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:37.803 16:09:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:39.704 16:09:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:39.704 16:09:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:39.704 16:09:09 -- common/autotest_common.sh@1193 -- # grep -c SPDK7 00:16:39.704 16:09:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:39.704 16:09:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:39.704 16:09:09 -- common/autotest_common.sh@1194 -- # return 0 00:16:39.704 16:09:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:39.704 16:09:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:16:39.962 16:09:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:16:39.962 16:09:09 -- common/autotest_common.sh@1184 -- # local i=0 
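The host side of the trace repeats the same pattern for every cnode: connect over NVMe/TCP, then poll until a block device whose serial matches the subsystem appears. A condensed sketch of that connect-and-verify step, mirroring the waitforserial helper's 15-retry, 2-second-sleep loop seen in this run (host NQN and host ID values copied from the trace):

for i in $(seq 1 11); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "nqn.2016-06.io.spdk:cnode$i" \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a \
        --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a
    # waitforserial: wait until lsblk reports a device with serial SPDKN
    tries=0
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
        tries=$((tries + 1))
        [ "$tries" -gt 15 ] && break
        sleep 2
    done
done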
00:16:39.962 16:09:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:39.962 16:09:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:39.962 16:09:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:41.889 16:09:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:41.889 16:09:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:41.889 16:09:11 -- common/autotest_common.sh@1193 -- # grep -c SPDK8 00:16:41.889 16:09:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:41.889 16:09:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.889 16:09:11 -- common/autotest_common.sh@1194 -- # return 0 00:16:41.889 16:09:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:41.889 16:09:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:16:42.147 16:09:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:16:42.147 16:09:11 -- common/autotest_common.sh@1184 -- # local i=0 00:16:42.147 16:09:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:42.147 16:09:11 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:42.147 16:09:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:44.045 16:09:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:44.045 16:09:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:44.045 16:09:13 -- common/autotest_common.sh@1193 -- # grep -c SPDK9 00:16:44.045 16:09:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:44.045 16:09:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:44.045 16:09:13 -- common/autotest_common.sh@1194 -- # return 0 00:16:44.045 16:09:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:44.045 16:09:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:16:44.303 16:09:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:16:44.303 16:09:14 -- common/autotest_common.sh@1184 -- # local i=0 00:16:44.303 16:09:14 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:44.303 16:09:14 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:44.303 16:09:14 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:46.206 16:09:16 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:46.206 16:09:16 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:46.206 16:09:16 -- common/autotest_common.sh@1193 -- # grep -c SPDK10 00:16:46.206 16:09:16 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:46.206 16:09:16 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.206 16:09:16 -- common/autotest_common.sh@1194 -- # return 0 00:16:46.206 16:09:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:46.206 16:09:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:16:46.464 16:09:16 
-- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:46.464 16:09:16 -- common/autotest_common.sh@1184 -- # local i=0 00:16:46.464 16:09:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.464 16:09:16 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:46.464 16:09:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:48.369 16:09:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:48.369 16:09:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:48.369 16:09:18 -- common/autotest_common.sh@1193 -- # grep -c SPDK11 00:16:48.369 16:09:18 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:48.369 16:09:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:48.369 16:09:18 -- common/autotest_common.sh@1194 -- # return 0 00:16:48.369 16:09:18 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:48.369 [global] 00:16:48.369 thread=1 00:16:48.369 invalidate=1 00:16:48.369 rw=read 00:16:48.369 time_based=1 00:16:48.369 runtime=10 00:16:48.369 ioengine=libaio 00:16:48.369 direct=1 00:16:48.369 bs=262144 00:16:48.369 iodepth=64 00:16:48.369 norandommap=1 00:16:48.369 numjobs=1 00:16:48.369 00:16:48.369 [job0] 00:16:48.369 filename=/dev/nvme0n1 00:16:48.369 [job1] 00:16:48.369 filename=/dev/nvme10n1 00:16:48.369 [job2] 00:16:48.369 filename=/dev/nvme1n1 00:16:48.369 [job3] 00:16:48.369 filename=/dev/nvme2n1 00:16:48.627 [job4] 00:16:48.627 filename=/dev/nvme3n1 00:16:48.627 [job5] 00:16:48.627 filename=/dev/nvme4n1 00:16:48.627 [job6] 00:16:48.627 filename=/dev/nvme5n1 00:16:48.627 [job7] 00:16:48.627 filename=/dev/nvme6n1 00:16:48.627 [job8] 00:16:48.627 filename=/dev/nvme7n1 00:16:48.627 [job9] 00:16:48.627 filename=/dev/nvme8n1 00:16:48.627 [job10] 00:16:48.627 filename=/dev/nvme9n1 00:16:48.627 Could not set queue depth (nvme0n1) 00:16:48.627 Could not set queue depth (nvme10n1) 00:16:48.627 Could not set queue depth (nvme1n1) 00:16:48.627 Could not set queue depth (nvme2n1) 00:16:48.627 Could not set queue depth (nvme3n1) 00:16:48.627 Could not set queue depth (nvme4n1) 00:16:48.627 Could not set queue depth (nvme5n1) 00:16:48.627 Could not set queue depth (nvme6n1) 00:16:48.627 Could not set queue depth (nvme7n1) 00:16:48.627 Could not set queue depth (nvme8n1) 00:16:48.627 Could not set queue depth (nvme9n1) 00:16:48.885 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:48.885 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:48.885 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:48.885 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:48.885 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:48.885 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:48.885 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:48.885 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:48.885 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:16:48.885 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:48.885 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:48.885 fio-3.35 00:16:48.885 Starting 11 threads 00:17:01.101 00:17:01.101 job0: (groupid=0, jobs=1): err= 0: pid=84543: Mon Apr 15 16:09:29 2024 00:17:01.101 read: IOPS=514, BW=129MiB/s (135MB/s)(1298MiB/10097msec) 00:17:01.101 slat (usec): min=18, max=76021, avg=1920.19, stdev=4310.46 00:17:01.101 clat (msec): min=11, max=224, avg=122.36, stdev=13.42 00:17:01.101 lat (msec): min=11, max=225, avg=124.28, stdev=13.76 00:17:01.101 clat percentiles (msec): 00:17:01.101 | 1.00th=[ 43], 5.00th=[ 111], 10.00th=[ 116], 20.00th=[ 118], 00:17:01.101 | 30.00th=[ 121], 40.00th=[ 123], 50.00th=[ 124], 60.00th=[ 125], 00:17:01.101 | 70.00th=[ 127], 80.00th=[ 128], 90.00th=[ 132], 95.00th=[ 136], 00:17:01.101 | 99.00th=[ 150], 99.50th=[ 159], 99.90th=[ 203], 99.95th=[ 203], 00:17:01.101 | 99.99th=[ 226] 00:17:01.101 bw ( KiB/s): min=124416, max=142621, per=6.39%, avg=131315.40, stdev=4449.86, samples=20 00:17:01.101 iops : min= 486, max= 557, avg=512.85, stdev=17.31, samples=20 00:17:01.101 lat (msec) : 20=0.17%, 50=1.02%, 100=1.42%, 250=97.38% 00:17:01.101 cpu : usr=0.30%, sys=2.29%, ctx=1481, majf=0, minf=4097 00:17:01.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:01.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:01.101 issued rwts: total=5193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:01.101 job1: (groupid=0, jobs=1): err= 0: pid=84544: Mon Apr 15 16:09:29 2024 00:17:01.101 read: IOPS=676, BW=169MiB/s (177MB/s)(1705MiB/10090msec) 00:17:01.101 slat (usec): min=19, max=62774, avg=1461.07, stdev=3135.26 00:17:01.101 clat (msec): min=11, max=191, avg=93.13, stdev= 9.01 00:17:01.101 lat (msec): min=11, max=191, avg=94.59, stdev= 9.12 00:17:01.101 clat percentiles (msec): 00:17:01.101 | 1.00th=[ 75], 5.00th=[ 83], 10.00th=[ 86], 20.00th=[ 88], 00:17:01.101 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 93], 00:17:01.101 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 104], 95.00th=[ 109], 00:17:01.101 | 99.00th=[ 118], 99.50th=[ 127], 99.90th=[ 174], 99.95th=[ 178], 00:17:01.101 | 99.99th=[ 192] 00:17:01.101 bw ( KiB/s): min=147161, max=182272, per=8.42%, avg=172954.10, stdev=8339.04, samples=20 00:17:01.101 iops : min= 574, max= 712, avg=675.50, stdev=32.66, samples=20 00:17:01.101 lat (msec) : 20=0.04%, 50=0.07%, 100=84.59%, 250=15.29% 00:17:01.101 cpu : usr=0.35%, sys=2.75%, ctx=1779, majf=0, minf=4097 00:17:01.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:01.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:01.101 issued rwts: total=6821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:01.101 job2: (groupid=0, jobs=1): err= 0: pid=84545: Mon Apr 15 16:09:29 2024 00:17:01.101 read: IOPS=968, BW=242MiB/s (254MB/s)(2429MiB/10037msec) 00:17:01.101 slat (usec): min=19, max=60715, avg=1024.12, stdev=2299.64 00:17:01.101 clat (msec): min=35, max=113, avg=65.02, stdev= 8.99 00:17:01.101 lat (msec): 
min=37, max=128, avg=66.05, stdev= 9.06 00:17:01.101 clat percentiles (msec): 00:17:01.101 | 1.00th=[ 50], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 59], 00:17:01.101 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 66], 00:17:01.101 | 70.00th=[ 68], 80.00th=[ 70], 90.00th=[ 75], 95.00th=[ 82], 00:17:01.101 | 99.00th=[ 100], 99.50th=[ 105], 99.90th=[ 111], 99.95th=[ 114], 00:17:01.101 | 99.99th=[ 114] 00:17:01.101 bw ( KiB/s): min=162116, max=272862, per=12.03%, avg=247084.20, stdev=24345.16, samples=20 00:17:01.101 iops : min= 633, max= 1065, avg=965.10, stdev=95.12, samples=20 00:17:01.101 lat (msec) : 50=1.34%, 100=97.78%, 250=0.89% 00:17:01.101 cpu : usr=0.38%, sys=3.91%, ctx=2384, majf=0, minf=4097 00:17:01.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:01.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:01.101 issued rwts: total=9717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:01.101 job3: (groupid=0, jobs=1): err= 0: pid=84546: Mon Apr 15 16:09:29 2024 00:17:01.101 read: IOPS=522, BW=131MiB/s (137MB/s)(1319MiB/10098msec) 00:17:01.101 slat (usec): min=19, max=34275, avg=1875.56, stdev=4373.19 00:17:01.101 clat (msec): min=12, max=211, avg=120.54, stdev=14.34 00:17:01.101 lat (msec): min=13, max=211, avg=122.42, stdev=14.85 00:17:01.101 clat percentiles (msec): 00:17:01.101 | 1.00th=[ 69], 5.00th=[ 89], 10.00th=[ 106], 20.00th=[ 118], 00:17:01.101 | 30.00th=[ 121], 40.00th=[ 122], 50.00th=[ 124], 60.00th=[ 125], 00:17:01.101 | 70.00th=[ 126], 80.00th=[ 128], 90.00th=[ 131], 95.00th=[ 136], 00:17:01.101 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 192], 99.95th=[ 209], 00:17:01.101 | 99.99th=[ 213] 00:17:01.101 bw ( KiB/s): min=124416, max=172032, per=6.49%, avg=133359.60, stdev=10808.76, samples=20 00:17:01.101 iops : min= 486, max= 672, avg=520.80, stdev=42.15, samples=20 00:17:01.101 lat (msec) : 20=0.02%, 50=0.23%, 100=8.40%, 250=91.35% 00:17:01.101 cpu : usr=0.29%, sys=2.18%, ctx=1407, majf=0, minf=4097 00:17:01.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:01.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:01.101 issued rwts: total=5274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:01.101 job4: (groupid=0, jobs=1): err= 0: pid=84547: Mon Apr 15 16:09:29 2024 00:17:01.101 read: IOPS=676, BW=169MiB/s (177MB/s)(1706MiB/10087msec) 00:17:01.101 slat (usec): min=22, max=58704, avg=1461.00, stdev=3089.94 00:17:01.101 clat (msec): min=57, max=187, avg=93.04, stdev= 9.24 00:17:01.101 lat (msec): min=57, max=187, avg=94.50, stdev= 9.30 00:17:01.101 clat percentiles (msec): 00:17:01.101 | 1.00th=[ 79], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 88], 00:17:01.101 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 93], 00:17:01.101 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 103], 95.00th=[ 110], 00:17:01.101 | 99.00th=[ 126], 99.50th=[ 130], 99.90th=[ 171], 99.95th=[ 182], 00:17:01.101 | 99.99th=[ 188] 00:17:01.101 bw ( KiB/s): min=147672, max=181397, per=8.42%, avg=173077.00, stdev=9432.77, samples=20 00:17:01.101 iops : min= 576, max= 708, avg=675.95, stdev=37.01, samples=20 00:17:01.101 lat (msec) : 100=86.83%, 250=13.17% 00:17:01.101 cpu : 
usr=0.29%, sys=2.40%, ctx=1999, majf=0, minf=4097 00:17:01.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:01.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:01.101 issued rwts: total=6825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:01.101 job5: (groupid=0, jobs=1): err= 0: pid=84548: Mon Apr 15 16:09:29 2024 00:17:01.101 read: IOPS=968, BW=242MiB/s (254MB/s)(2427MiB/10022msec) 00:17:01.101 slat (usec): min=18, max=56314, avg=1024.97, stdev=2260.66 00:17:01.101 clat (msec): min=20, max=130, avg=65.01, stdev= 9.41 00:17:01.101 lat (msec): min=24, max=130, avg=66.03, stdev= 9.47 00:17:01.101 clat percentiles (msec): 00:17:01.101 | 1.00th=[ 50], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 59], 00:17:01.101 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 66], 00:17:01.101 | 70.00th=[ 68], 80.00th=[ 70], 90.00th=[ 75], 95.00th=[ 84], 00:17:01.101 | 99.00th=[ 102], 99.50th=[ 109], 99.90th=[ 112], 99.95th=[ 112], 00:17:01.101 | 99.99th=[ 131] 00:17:01.101 bw ( KiB/s): min=157696, max=269312, per=11.99%, avg=246303.26, stdev=26216.85, samples=19 00:17:01.101 iops : min= 616, max= 1052, avg=962.05, stdev=102.41, samples=19 00:17:01.101 lat (msec) : 50=1.37%, 100=97.56%, 250=1.07% 00:17:01.101 cpu : usr=0.44%, sys=3.69%, ctx=2452, majf=0, minf=4097 00:17:01.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:01.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:01.102 issued rwts: total=9707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:01.102 job6: (groupid=0, jobs=1): err= 0: pid=84549: Mon Apr 15 16:09:29 2024 00:17:01.102 read: IOPS=678, BW=170MiB/s (178MB/s)(1709MiB/10070msec) 00:17:01.102 slat (usec): min=17, max=30380, avg=1460.26, stdev=3035.55 00:17:01.102 clat (msec): min=19, max=185, avg=92.78, stdev=10.27 00:17:01.102 lat (msec): min=19, max=185, avg=94.24, stdev=10.37 00:17:01.102 clat percentiles (msec): 00:17:01.102 | 1.00th=[ 63], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 87], 00:17:01.102 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 94], 00:17:01.102 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 104], 95.00th=[ 110], 00:17:01.102 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 174], 99.95th=[ 174], 00:17:01.102 | 99.99th=[ 186] 00:17:01.102 bw ( KiB/s): min=147456, max=182272, per=8.48%, avg=174240.58, stdev=8254.62, samples=19 00:17:01.102 iops : min= 576, max= 712, avg=680.53, stdev=32.25, samples=19 00:17:01.102 lat (msec) : 20=0.06%, 50=0.56%, 100=85.15%, 250=14.24% 00:17:01.102 cpu : usr=0.29%, sys=2.87%, ctx=1725, majf=0, minf=4097 00:17:01.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:01.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:01.102 issued rwts: total=6834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:01.102 job7: (groupid=0, jobs=1): err= 0: pid=84550: Mon Apr 15 16:09:29 2024 00:17:01.102 read: IOPS=516, BW=129MiB/s (135MB/s)(1301MiB/10084msec) 00:17:01.102 slat (usec): min=20, max=39649, avg=1901.46, 
stdev=4281.69 00:17:01.102 clat (msec): min=40, max=201, avg=122.03, stdev=11.40 00:17:01.102 lat (msec): min=40, max=201, avg=123.94, stdev=11.86 00:17:01.102 clat percentiles (msec): 00:17:01.102 | 1.00th=[ 74], 5.00th=[ 106], 10.00th=[ 116], 20.00th=[ 120], 00:17:01.102 | 30.00th=[ 121], 40.00th=[ 122], 50.00th=[ 123], 60.00th=[ 125], 00:17:01.102 | 70.00th=[ 126], 80.00th=[ 128], 90.00th=[ 131], 95.00th=[ 136], 00:17:01.102 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 194], 99.95th=[ 197], 00:17:01.102 | 99.99th=[ 203] 00:17:01.102 bw ( KiB/s): min=126464, max=151040, per=6.41%, avg=131596.70, stdev=5699.31, samples=20 00:17:01.102 iops : min= 494, max= 590, avg=514.00, stdev=22.29, samples=20 00:17:01.102 lat (msec) : 50=0.38%, 100=3.46%, 250=96.16% 00:17:01.102 cpu : usr=0.16%, sys=1.82%, ctx=1451, majf=0, minf=4097 00:17:01.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:01.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:01.102 issued rwts: total=5204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:01.102 job8: (groupid=0, jobs=1): err= 0: pid=84551: Mon Apr 15 16:09:29 2024 00:17:01.102 read: IOPS=511, BW=128MiB/s (134MB/s)(1289MiB/10081msec) 00:17:01.102 slat (usec): min=18, max=73040, avg=1934.25, stdev=4513.89 00:17:01.102 clat (msec): min=79, max=203, avg=123.12, stdev= 9.33 00:17:01.102 lat (msec): min=82, max=203, avg=125.05, stdev= 9.76 00:17:01.102 clat percentiles (msec): 00:17:01.102 | 1.00th=[ 91], 5.00th=[ 110], 10.00th=[ 116], 20.00th=[ 120], 00:17:01.102 | 30.00th=[ 121], 40.00th=[ 123], 50.00th=[ 124], 60.00th=[ 125], 00:17:01.102 | 70.00th=[ 127], 80.00th=[ 129], 90.00th=[ 132], 95.00th=[ 136], 00:17:01.102 | 99.00th=[ 146], 99.50th=[ 161], 99.90th=[ 201], 99.95th=[ 201], 00:17:01.102 | 99.99th=[ 203] 00:17:01.102 bw ( KiB/s): min=124416, max=147161, per=6.35%, avg=130490.16, stdev=5228.67, samples=19 00:17:01.102 iops : min= 486, max= 574, avg=509.63, stdev=20.25, samples=19 00:17:01.102 lat (msec) : 100=2.50%, 250=97.50% 00:17:01.102 cpu : usr=0.24%, sys=1.99%, ctx=1449, majf=0, minf=4097 00:17:01.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:01.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:01.102 issued rwts: total=5156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:01.102 job9: (groupid=0, jobs=1): err= 0: pid=84552: Mon Apr 15 16:09:29 2024 00:17:01.102 read: IOPS=1013, BW=253MiB/s (266MB/s)(2536MiB/10011msec) 00:17:01.102 slat (usec): min=21, max=24881, avg=981.17, stdev=2096.84 00:17:01.102 clat (msec): min=10, max=101, avg=62.14, stdev= 6.81 00:17:01.102 lat (msec): min=12, max=101, avg=63.12, stdev= 6.82 00:17:01.102 clat percentiles (usec): 00:17:01.102 | 1.00th=[46924], 5.00th=[52167], 10.00th=[54789], 20.00th=[57410], 00:17:01.102 | 30.00th=[58983], 40.00th=[60556], 50.00th=[62129], 60.00th=[63701], 00:17:01.102 | 70.00th=[64750], 80.00th=[66847], 90.00th=[69731], 95.00th=[72877], 00:17:01.102 | 99.00th=[80217], 99.50th=[85459], 99.90th=[93848], 99.95th=[94897], 00:17:01.102 | 99.99th=[94897] 00:17:01.102 bw ( KiB/s): min=231424, max=272351, per=12.56%, avg=258127.32, stdev=9055.38, samples=19 00:17:01.102 iops : 
min= 904, max= 1063, avg=1008.21, stdev=35.30, samples=19 00:17:01.102 lat (msec) : 20=0.18%, 50=2.09%, 100=97.72%, 250=0.01% 00:17:01.102 cpu : usr=0.39%, sys=3.31%, ctx=2613, majf=0, minf=4097 00:17:01.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:01.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:01.102 issued rwts: total=10145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:01.102 job10: (groupid=0, jobs=1): err= 0: pid=84553: Mon Apr 15 16:09:29 2024 00:17:01.102 read: IOPS=1013, BW=253MiB/s (266MB/s)(2540MiB/10026msec) 00:17:01.102 slat (usec): min=19, max=39095, avg=979.75, stdev=2133.18 00:17:01.102 clat (usec): min=13636, max=98676, avg=62123.27, stdev=6612.27 00:17:01.102 lat (usec): min=14578, max=98714, avg=63103.02, stdev=6633.09 00:17:01.102 clat percentiles (usec): 00:17:01.102 | 1.00th=[45876], 5.00th=[52167], 10.00th=[54789], 20.00th=[57410], 00:17:01.102 | 30.00th=[58983], 40.00th=[60556], 50.00th=[62129], 60.00th=[63701], 00:17:01.102 | 70.00th=[65274], 80.00th=[67634], 90.00th=[69731], 95.00th=[71828], 00:17:01.102 | 99.00th=[78119], 99.50th=[81265], 99.90th=[85459], 99.95th=[93848], 00:17:01.102 | 99.99th=[99091] 00:17:01.102 bw ( KiB/s): min=235008, max=277461, per=12.58%, avg=258353.05, stdev=8821.71, samples=20 00:17:01.102 iops : min= 918, max= 1083, avg=1009.15, stdev=34.37, samples=20 00:17:01.102 lat (msec) : 20=0.02%, 50=2.84%, 100=97.15% 00:17:01.102 cpu : usr=0.53%, sys=3.78%, ctx=2650, majf=0, minf=4097 00:17:01.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:01.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:01.102 issued rwts: total=10158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:01.102 00:17:01.102 Run status group 0 (all jobs): 00:17:01.102 READ: bw=2006MiB/s (2104MB/s), 128MiB/s-253MiB/s (134MB/s-266MB/s), io=19.8GiB (21.2GB), run=10011-10098msec 00:17:01.102 00:17:01.102 Disk stats (read/write): 00:17:01.102 nvme0n1: ios=10216/0, merge=0/0, ticks=1219405/0, in_queue=1219405, util=97.34% 00:17:01.102 nvme10n1: ios=13445/0, merge=0/0, ticks=1222087/0, in_queue=1222087, util=97.57% 00:17:01.102 nvme1n1: ios=19208/0, merge=0/0, ticks=1229443/0, in_queue=1229443, util=97.80% 00:17:01.102 nvme2n1: ios=10373/0, merge=0/0, ticks=1221017/0, in_queue=1221017, util=97.94% 00:17:01.102 nvme3n1: ios=13478/0, merge=0/0, ticks=1223464/0, in_queue=1223464, util=98.00% 00:17:01.102 nvme4n1: ios=19161/0, merge=0/0, ticks=1227415/0, in_queue=1227415, util=98.11% 00:17:01.102 nvme5n1: ios=13473/0, merge=0/0, ticks=1220467/0, in_queue=1220467, util=98.27% 00:17:01.102 nvme6n1: ios=10225/0, merge=0/0, ticks=1220130/0, in_queue=1220130, util=98.42% 00:17:01.102 nvme7n1: ios=10124/0, merge=0/0, ticks=1217755/0, in_queue=1217755, util=98.74% 00:17:01.102 nvme8n1: ios=20033/0, merge=0/0, ticks=1227402/0, in_queue=1227402, util=98.91% 00:17:01.102 nvme9n1: ios=20109/0, merge=0/0, ticks=1230057/0, in_queue=1230057, util=99.23% 00:17:01.102 16:09:29 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:17:01.102 [global] 00:17:01.102 thread=1 00:17:01.102 
invalidate=1 00:17:01.102 rw=randwrite 00:17:01.102 time_based=1 00:17:01.102 runtime=10 00:17:01.102 ioengine=libaio 00:17:01.102 direct=1 00:17:01.102 bs=262144 00:17:01.102 iodepth=64 00:17:01.102 norandommap=1 00:17:01.102 numjobs=1 00:17:01.102 00:17:01.102 [job0] 00:17:01.102 filename=/dev/nvme0n1 00:17:01.102 [job1] 00:17:01.102 filename=/dev/nvme10n1 00:17:01.102 [job2] 00:17:01.102 filename=/dev/nvme1n1 00:17:01.102 [job3] 00:17:01.102 filename=/dev/nvme2n1 00:17:01.102 [job4] 00:17:01.102 filename=/dev/nvme3n1 00:17:01.102 [job5] 00:17:01.102 filename=/dev/nvme4n1 00:17:01.102 [job6] 00:17:01.102 filename=/dev/nvme5n1 00:17:01.102 [job7] 00:17:01.102 filename=/dev/nvme6n1 00:17:01.102 [job8] 00:17:01.102 filename=/dev/nvme7n1 00:17:01.102 [job9] 00:17:01.102 filename=/dev/nvme8n1 00:17:01.102 [job10] 00:17:01.102 filename=/dev/nvme9n1 00:17:01.102 Could not set queue depth (nvme0n1) 00:17:01.102 Could not set queue depth (nvme10n1) 00:17:01.102 Could not set queue depth (nvme1n1) 00:17:01.102 Could not set queue depth (nvme2n1) 00:17:01.102 Could not set queue depth (nvme3n1) 00:17:01.102 Could not set queue depth (nvme4n1) 00:17:01.102 Could not set queue depth (nvme5n1) 00:17:01.102 Could not set queue depth (nvme6n1) 00:17:01.102 Could not set queue depth (nvme7n1) 00:17:01.102 Could not set queue depth (nvme8n1) 00:17:01.102 Could not set queue depth (nvme9n1) 00:17:01.102 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:01.102 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:01.102 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:01.103 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:01.103 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:01.103 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:01.103 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:01.103 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:01.103 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:01.103 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:01.103 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:01.103 fio-3.35 00:17:01.103 Starting 11 threads 00:17:11.092 00:17:11.092 job0: (groupid=0, jobs=1): err= 0: pid=84750: Mon Apr 15 16:09:39 2024 00:17:11.092 write: IOPS=406, BW=102MiB/s (106MB/s)(1029MiB/10135msec); 0 zone resets 00:17:11.092 slat (usec): min=24, max=17923, avg=2424.19, stdev=4157.20 00:17:11.092 clat (msec): min=5, max=285, avg=155.04, stdev=16.32 00:17:11.092 lat (msec): min=5, max=286, avg=157.46, stdev=16.03 00:17:11.092 clat percentiles (msec): 00:17:11.092 | 1.00th=[ 75], 5.00th=[ 144], 10.00th=[ 148], 20.00th=[ 150], 00:17:11.092 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 159], 00:17:11.092 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 163], 00:17:11.092 | 99.00th=[ 190], 
99.50th=[ 239], 99.90th=[ 275], 99.95th=[ 279], 00:17:11.092 | 99.99th=[ 288] 00:17:11.092 bw ( KiB/s): min=100352, max=113152, per=7.03%, avg=103772.30, stdev=2968.87, samples=20 00:17:11.092 iops : min= 392, max= 442, avg=405.35, stdev=11.61, samples=20 00:17:11.092 lat (msec) : 10=0.02%, 20=0.10%, 50=0.58%, 100=0.68%, 250=98.28% 00:17:11.092 lat (msec) : 500=0.34% 00:17:11.092 cpu : usr=0.92%, sys=1.11%, ctx=6491, majf=0, minf=1 00:17:11.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:17:11.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:11.092 issued rwts: total=0,4117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.092 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:11.092 job1: (groupid=0, jobs=1): err= 0: pid=84762: Mon Apr 15 16:09:39 2024 00:17:11.092 write: IOPS=403, BW=101MiB/s (106MB/s)(1022MiB/10142msec); 0 zone resets 00:17:11.092 slat (usec): min=24, max=40963, avg=2441.42, stdev=4199.67 00:17:11.092 clat (msec): min=43, max=288, avg=156.02, stdev=12.98 00:17:11.092 lat (msec): min=43, max=288, avg=158.46, stdev=12.44 00:17:11.092 clat percentiles (msec): 00:17:11.092 | 1.00th=[ 131], 5.00th=[ 146], 10.00th=[ 148], 20.00th=[ 150], 00:17:11.092 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:17:11.092 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 163], 00:17:11.092 | 99.00th=[ 192], 99.50th=[ 241], 99.90th=[ 279], 99.95th=[ 279], 00:17:11.092 | 99.99th=[ 288] 00:17:11.092 bw ( KiB/s): min=94208, max=108544, per=6.98%, avg=103019.50, stdev=2688.91, samples=20 00:17:11.092 iops : min= 368, max= 424, avg=402.40, stdev=10.51, samples=20 00:17:11.092 lat (msec) : 50=0.10%, 100=0.59%, 250=98.97%, 500=0.34% 00:17:11.092 cpu : usr=1.01%, sys=1.27%, ctx=6911, majf=0, minf=1 00:17:11.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:17:11.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:11.092 issued rwts: total=0,4088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.092 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:11.092 job2: (groupid=0, jobs=1): err= 0: pid=84763: Mon Apr 15 16:09:39 2024 00:17:11.092 write: IOPS=401, BW=100MiB/s (105MB/s)(1017MiB/10128msec); 0 zone resets 00:17:11.092 slat (usec): min=23, max=75243, avg=2454.66, stdev=4326.07 00:17:11.092 clat (msec): min=77, max=274, avg=156.83, stdev=10.85 00:17:11.092 lat (msec): min=77, max=274, avg=159.29, stdev=10.13 00:17:11.092 clat percentiles (msec): 00:17:11.092 | 1.00th=[ 140], 5.00th=[ 146], 10.00th=[ 148], 20.00th=[ 150], 00:17:11.092 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:17:11.092 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 165], 00:17:11.092 | 99.00th=[ 205], 99.50th=[ 226], 99.90th=[ 266], 99.95th=[ 266], 00:17:11.092 | 99.99th=[ 275] 00:17:11.092 bw ( KiB/s): min=84136, max=106496, per=6.94%, avg=102526.15, stdev=4604.15, samples=20 00:17:11.092 iops : min= 328, max= 416, avg=400.45, stdev=18.12, samples=20 00:17:11.092 lat (msec) : 100=0.29%, 250=99.46%, 500=0.25% 00:17:11.092 cpu : usr=0.88%, sys=1.10%, ctx=8230, majf=0, minf=1 00:17:11.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:17:11.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.092 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:11.092 issued rwts: total=0,4068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.092 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:11.092 job3: (groupid=0, jobs=1): err= 0: pid=84764: Mon Apr 15 16:09:39 2024 00:17:11.092 write: IOPS=697, BW=174MiB/s (183MB/s)(1759MiB/10082msec); 0 zone resets 00:17:11.092 slat (usec): min=25, max=12420, avg=1401.61, stdev=2364.49 00:17:11.092 clat (msec): min=3, max=166, avg=90.27, stdev= 7.63 00:17:11.092 lat (msec): min=5, max=166, avg=91.67, stdev= 7.41 00:17:11.092 clat percentiles (msec): 00:17:11.092 | 1.00th=[ 63], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:17:11.092 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 92], 00:17:11.092 | 70.00th=[ 93], 80.00th=[ 93], 90.00th=[ 94], 95.00th=[ 95], 00:17:11.092 | 99.00th=[ 103], 99.50th=[ 117], 99.90th=[ 157], 99.95th=[ 161], 00:17:11.092 | 99.99th=[ 167] 00:17:11.092 bw ( KiB/s): min=175104, max=186880, per=12.09%, avg=178517.50, stdev=3166.34, samples=20 00:17:11.092 iops : min= 684, max= 730, avg=697.25, stdev=12.38, samples=20 00:17:11.092 lat (msec) : 4=0.01%, 10=0.04%, 20=0.18%, 50=0.40%, 100=98.14% 00:17:11.092 lat (msec) : 250=1.22% 00:17:11.092 cpu : usr=1.38%, sys=2.07%, ctx=16155, majf=0, minf=1 00:17:11.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:11.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:11.092 issued rwts: total=0,7037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.092 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:11.092 job4: (groupid=0, jobs=1): err= 0: pid=84765: Mon Apr 15 16:09:39 2024 00:17:11.092 write: IOPS=406, BW=102MiB/s (106MB/s)(1029MiB/10137msec); 0 zone resets 00:17:11.092 slat (usec): min=20, max=12455, avg=2425.38, stdev=4149.61 00:17:11.092 clat (msec): min=14, max=285, avg=155.14, stdev=15.74 00:17:11.092 lat (msec): min=14, max=285, avg=157.57, stdev=15.42 00:17:11.092 clat percentiles (msec): 00:17:11.092 | 1.00th=[ 81], 5.00th=[ 146], 10.00th=[ 148], 20.00th=[ 150], 00:17:11.092 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:17:11.092 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 161], 95.00th=[ 163], 00:17:11.092 | 99.00th=[ 188], 99.50th=[ 236], 99.90th=[ 275], 99.95th=[ 275], 00:17:11.092 | 99.99th=[ 284] 00:17:11.092 bw ( KiB/s): min=102195, max=108544, per=7.02%, avg=103746.55, stdev=1933.02, samples=20 00:17:11.092 iops : min= 399, max= 424, avg=405.25, stdev= 7.56, samples=20 00:17:11.092 lat (msec) : 20=0.10%, 50=0.49%, 100=0.68%, 250=98.40%, 500=0.34% 00:17:11.092 cpu : usr=0.92%, sys=1.24%, ctx=7614, majf=0, minf=1 00:17:11.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:17:11.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:11.093 issued rwts: total=0,4116,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.093 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:11.093 job5: (groupid=0, jobs=1): err= 0: pid=84766: Mon Apr 15 16:09:39 2024 00:17:11.093 write: IOPS=404, BW=101MiB/s (106MB/s)(1026MiB/10138msec); 0 zone resets 00:17:11.093 slat (usec): min=26, max=26080, avg=2431.61, stdev=4172.30 00:17:11.093 clat (msec): min=19, max=291, avg=155.60, stdev=15.59 00:17:11.093 lat (msec): min=19, max=291, 
avg=158.03, stdev=15.25 00:17:11.093 clat percentiles (msec): 00:17:11.093 | 1.00th=[ 85], 5.00th=[ 146], 10.00th=[ 148], 20.00th=[ 150], 00:17:11.093 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:17:11.093 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 163], 00:17:11.093 | 99.00th=[ 194], 99.50th=[ 243], 99.90th=[ 279], 99.95th=[ 284], 00:17:11.093 | 99.99th=[ 292] 00:17:11.093 bw ( KiB/s): min=102195, max=106496, per=7.00%, avg=103429.00, stdev=1443.02, samples=20 00:17:11.093 iops : min= 399, max= 416, avg=404.00, stdev= 5.65, samples=20 00:17:11.093 lat (msec) : 20=0.10%, 50=0.49%, 100=0.58%, 250=98.39%, 500=0.44% 00:17:11.093 cpu : usr=0.83%, sys=1.07%, ctx=8913, majf=0, minf=1 00:17:11.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:17:11.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:11.093 issued rwts: total=0,4104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.093 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:11.093 job6: (groupid=0, jobs=1): err= 0: pid=84767: Mon Apr 15 16:09:39 2024 00:17:11.093 write: IOPS=403, BW=101MiB/s (106MB/s)(1022MiB/10130msec); 0 zone resets 00:17:11.093 slat (usec): min=25, max=53308, avg=2440.49, stdev=4226.96 00:17:11.093 clat (msec): min=56, max=282, avg=156.10, stdev=12.17 00:17:11.093 lat (msec): min=56, max=282, avg=158.54, stdev=11.61 00:17:11.093 clat percentiles (msec): 00:17:11.093 | 1.00th=[ 127], 5.00th=[ 146], 10.00th=[ 148], 20.00th=[ 150], 00:17:11.093 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:17:11.093 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 163], 00:17:11.093 | 99.00th=[ 186], 99.50th=[ 234], 99.90th=[ 275], 99.95th=[ 275], 00:17:11.093 | 99.99th=[ 284] 00:17:11.093 bw ( KiB/s): min=98107, max=108544, per=6.98%, avg=103019.90, stdev=2331.97, samples=20 00:17:11.093 iops : min= 383, max= 424, avg=402.40, stdev= 9.14, samples=20 00:17:11.093 lat (msec) : 100=0.59%, 250=99.07%, 500=0.34% 00:17:11.093 cpu : usr=1.01%, sys=1.31%, ctx=7471, majf=0, minf=1 00:17:11.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:17:11.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:11.093 issued rwts: total=0,4088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.093 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:11.093 job7: (groupid=0, jobs=1): err= 0: pid=84768: Mon Apr 15 16:09:39 2024 00:17:11.093 write: IOPS=1170, BW=293MiB/s (307MB/s)(2942MiB/10048msec); 0 zone resets 00:17:11.093 slat (usec): min=21, max=8012, avg=840.03, stdev=1382.03 00:17:11.093 clat (usec): min=3122, max=98893, avg=53778.28, stdev=3553.96 00:17:11.093 lat (usec): min=4501, max=98933, avg=54618.31, stdev=3411.25 00:17:11.093 clat percentiles (usec): 00:17:11.093 | 1.00th=[47973], 5.00th=[50070], 10.00th=[51119], 20.00th=[52167], 00:17:11.093 | 30.00th=[52691], 40.00th=[53216], 50.00th=[54264], 60.00th=[54789], 00:17:11.093 | 70.00th=[54789], 80.00th=[55313], 90.00th=[56361], 95.00th=[56886], 00:17:11.093 | 99.00th=[58459], 99.50th=[61080], 99.90th=[88605], 99.95th=[94897], 00:17:11.093 | 99.99th=[99091] 00:17:11.093 bw ( KiB/s): min=293888, max=311296, per=20.28%, avg=299537.50, stdev=4628.64, samples=20 00:17:11.093 iops : min= 1148, max= 1216, avg=1170.00, stdev=18.13, 
samples=20 00:17:11.093 lat (msec) : 4=0.01%, 10=0.05%, 20=0.14%, 50=3.98%, 100=95.83% 00:17:11.093 cpu : usr=2.17%, sys=3.04%, ctx=27131, majf=0, minf=1 00:17:11.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:11.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:11.093 issued rwts: total=0,11766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.093 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:11.093 job8: (groupid=0, jobs=1): err= 0: pid=84769: Mon Apr 15 16:09:39 2024 00:17:11.093 write: IOPS=402, BW=101MiB/s (106MB/s)(1021MiB/10136msec); 0 zone resets 00:17:11.093 slat (usec): min=25, max=52554, avg=2443.42, stdev=4246.39 00:17:11.093 clat (msec): min=11, max=289, avg=156.33, stdev=16.42 00:17:11.093 lat (msec): min=11, max=289, avg=158.78, stdev=16.11 00:17:11.093 clat percentiles (msec): 00:17:11.093 | 1.00th=[ 78], 5.00th=[ 146], 10.00th=[ 148], 20.00th=[ 150], 00:17:11.093 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:17:11.093 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 165], 00:17:11.093 | 99.00th=[ 194], 99.50th=[ 241], 99.90th=[ 279], 99.95th=[ 279], 00:17:11.093 | 99.99th=[ 292] 00:17:11.093 bw ( KiB/s): min=100352, max=106496, per=6.97%, avg=102917.10, stdev=1471.41, samples=20 00:17:11.093 iops : min= 392, max= 416, avg=402.00, stdev= 5.76, samples=20 00:17:11.093 lat (msec) : 20=0.10%, 50=0.49%, 100=0.78%, 250=98.19%, 500=0.44% 00:17:11.093 cpu : usr=0.76%, sys=1.15%, ctx=7434, majf=0, minf=1 00:17:11.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:17:11.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:11.093 issued rwts: total=0,4084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.093 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:11.093 job9: (groupid=0, jobs=1): err= 0: pid=84770: Mon Apr 15 16:09:39 2024 00:17:11.093 write: IOPS=693, BW=173MiB/s (182MB/s)(1748MiB/10076msec); 0 zone resets 00:17:11.093 slat (usec): min=28, max=14449, avg=1424.72, stdev=2384.59 00:17:11.093 clat (msec): min=18, max=159, avg=90.77, stdev= 6.20 00:17:11.093 lat (msec): min=18, max=159, avg=92.19, stdev= 5.85 00:17:11.093 clat percentiles (msec): 00:17:11.093 | 1.00th=[ 82], 5.00th=[ 86], 10.00th=[ 87], 20.00th=[ 88], 00:17:11.093 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 92], 00:17:11.093 | 70.00th=[ 93], 80.00th=[ 94], 90.00th=[ 94], 95.00th=[ 95], 00:17:11.093 | 99.00th=[ 111], 99.50th=[ 121], 99.90th=[ 148], 99.95th=[ 155], 00:17:11.093 | 99.99th=[ 159] 00:17:11.093 bw ( KiB/s): min=161792, max=184832, per=12.01%, avg=177364.75, stdev=4676.95, samples=20 00:17:11.093 iops : min= 632, max= 722, avg=692.80, stdev=18.28, samples=20 00:17:11.093 lat (msec) : 20=0.06%, 50=0.23%, 100=97.90%, 250=1.82% 00:17:11.093 cpu : usr=1.56%, sys=2.12%, ctx=15921, majf=0, minf=1 00:17:11.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:11.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:11.093 issued rwts: total=0,6992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.093 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:11.093 job10: (groupid=0, jobs=1): err= 0: 
pid=84771: Mon Apr 15 16:09:39 2024 00:17:11.093 write: IOPS=399, BW=100.0MiB/s (105MB/s)(1012MiB/10124msec); 0 zone resets 00:17:11.093 slat (usec): min=25, max=77730, avg=2453.08, stdev=4376.48 00:17:11.093 clat (msec): min=40, max=277, avg=157.54, stdev=11.92 00:17:11.093 lat (msec): min=42, max=277, avg=160.00, stdev=11.29 00:17:11.093 clat percentiles (msec): 00:17:11.093 | 1.00th=[ 136], 5.00th=[ 146], 10.00th=[ 150], 20.00th=[ 153], 00:17:11.093 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:17:11.093 | 70.00th=[ 161], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 169], 00:17:11.093 | 99.00th=[ 201], 99.50th=[ 228], 99.90th=[ 268], 99.95th=[ 268], 00:17:11.093 | 99.99th=[ 279] 00:17:11.093 bw ( KiB/s): min=81920, max=106496, per=6.91%, avg=102005.75, stdev=4982.48, samples=20 00:17:11.093 iops : min= 320, max= 416, avg=398.45, stdev=19.46, samples=20 00:17:11.093 lat (msec) : 50=0.12%, 100=0.30%, 250=99.33%, 500=0.25% 00:17:11.093 cpu : usr=1.00%, sys=1.22%, ctx=5621, majf=0, minf=1 00:17:11.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:17:11.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:11.093 issued rwts: total=0,4048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.093 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:11.093 00:17:11.093 Run status group 0 (all jobs): 00:17:11.093 WRITE: bw=1442MiB/s (1512MB/s), 100.0MiB/s-293MiB/s (105MB/s-307MB/s), io=14.3GiB (15.3GB), run=10048-10142msec 00:17:11.093 00:17:11.093 Disk stats (read/write): 00:17:11.093 nvme0n1: ios=49/8232, merge=0/0, ticks=41/1231402, in_queue=1231443, util=96.99% 00:17:11.093 nvme10n1: ios=35/7963, merge=0/0, ticks=34/1199811, in_queue=1199845, util=97.15% 00:17:11.093 nvme1n1: ios=0/8125, merge=0/0, ticks=0/1231734, in_queue=1231734, util=97.32% 00:17:11.093 nvme2n1: ios=0/13797, merge=0/0, ticks=0/1204981, in_queue=1204981, util=97.63% 00:17:11.093 nvme3n1: ios=0/8014, merge=0/0, ticks=0/1201153, in_queue=1201153, util=97.64% 00:17:11.093 nvme4n1: ios=0/8000, merge=0/0, ticks=0/1201155, in_queue=1201155, util=98.00% 00:17:11.093 nvme5n1: ios=0/8169, merge=0/0, ticks=0/1229692, in_queue=1229692, util=98.03% 00:17:11.093 nvme6n1: ios=0/23160, merge=0/0, ticks=0/1206483, in_queue=1206483, util=98.19% 00:17:11.093 nvme7n1: ios=0/7960, merge=0/0, ticks=0/1200699, in_queue=1200699, util=98.59% 00:17:11.093 nvme8n1: ios=0/13684, merge=0/0, ticks=0/1202503, in_queue=1202503, util=98.59% 00:17:11.093 nvme9n1: ios=0/8081, merge=0/0, ticks=0/1229325, in_queue=1229325, util=98.62% 00:17:11.093 16:09:39 -- target/multiconnection.sh@36 -- # sync 00:17:11.093 16:09:39 -- target/multiconnection.sh@37 -- # seq 1 11 00:17:11.093 16:09:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:11.093 16:09:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:11.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.093 16:09:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:17:11.093 16:09:40 -- common/autotest_common.sh@1205 -- # local i=0 00:17:11.093 16:09:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # grep -q 
-w SPDK1 00:17:11.094 16:09:40 -- common/autotest_common.sh@1217 -- # return 0 00:17:11.094 16:09:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:11.094 16:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.094 16:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.094 16:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.094 16:09:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:11.094 16:09:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:17:11.094 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:17:11.094 16:09:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:17:11.094 16:09:40 -- common/autotest_common.sh@1205 -- # local i=0 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK2 00:17:11.094 16:09:40 -- common/autotest_common.sh@1217 -- # return 0 00:17:11.094 16:09:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:11.094 16:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.094 16:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.094 16:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.094 16:09:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:11.094 16:09:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:17:11.094 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:17:11.094 16:09:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:17:11.094 16:09:40 -- common/autotest_common.sh@1205 -- # local i=0 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK3 00:17:11.094 16:09:40 -- common/autotest_common.sh@1217 -- # return 0 00:17:11.094 16:09:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:11.094 16:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.094 16:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.094 16:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.094 16:09:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:11.094 16:09:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:17:11.094 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:17:11.094 16:09:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:17:11.094 16:09:40 -- common/autotest_common.sh@1205 -- # local i=0 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK4 00:17:11.094 16:09:40 -- common/autotest_common.sh@1217 -- # return 0 00:17:11.094 
16:09:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:11.094 16:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.094 16:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.094 16:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.094 16:09:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:11.094 16:09:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:17:11.094 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:17:11.094 16:09:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:17:11.094 16:09:40 -- common/autotest_common.sh@1205 -- # local i=0 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK5 00:17:11.094 16:09:40 -- common/autotest_common.sh@1217 -- # return 0 00:17:11.094 16:09:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:17:11.094 16:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.094 16:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.094 16:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.094 16:09:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:11.094 16:09:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:17:11.094 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:17:11.094 16:09:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:17:11.094 16:09:40 -- common/autotest_common.sh@1205 -- # local i=0 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK6 00:17:11.094 16:09:40 -- common/autotest_common.sh@1217 -- # return 0 00:17:11.094 16:09:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:17:11.094 16:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.094 16:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.094 16:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.094 16:09:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:11.094 16:09:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:17:11.094 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:17:11.094 16:09:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:17:11.094 16:09:40 -- common/autotest_common.sh@1205 -- # local i=0 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK7 00:17:11.094 16:09:40 -- common/autotest_common.sh@1217 -- # return 0 00:17:11.094 16:09:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode7 00:17:11.094 16:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.094 16:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.094 16:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.094 16:09:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:11.094 16:09:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:17:11.094 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:17:11.094 16:09:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:17:11.094 16:09:40 -- common/autotest_common.sh@1205 -- # local i=0 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK8 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1217 -- # return 0 00:17:11.094 16:09:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:17:11.094 16:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.094 16:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.094 16:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.094 16:09:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:11.094 16:09:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:17:11.094 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:17:11.094 16:09:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:17:11.094 16:09:40 -- common/autotest_common.sh@1205 -- # local i=0 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK9 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1217 -- # return 0 00:17:11.094 16:09:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:17:11.094 16:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.094 16:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.094 16:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.094 16:09:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:11.094 16:09:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:17:11.094 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:17:11.094 16:09:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:17:11.094 16:09:40 -- common/autotest_common.sh@1205 -- # local i=0 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK10 00:17:11.094 16:09:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:11.094 16:09:40 -- common/autotest_common.sh@1217 -- # return 0 00:17:11.094 16:09:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:17:11.094 16:09:40 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:17:11.094 16:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:11.094 16:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.094 16:09:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:11.094 16:09:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:17:11.094 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:17:11.094 16:09:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:17:11.094 16:09:41 -- common/autotest_common.sh@1205 -- # local i=0 00:17:11.094 16:09:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:11.094 16:09:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:17:11.094 16:09:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:11.094 16:09:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK11 00:17:11.352 16:09:41 -- common/autotest_common.sh@1217 -- # return 0 00:17:11.352 16:09:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:17:11.352 16:09:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.353 16:09:41 -- common/autotest_common.sh@10 -- # set +x 00:17:11.353 16:09:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.353 16:09:41 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:17:11.353 16:09:41 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:17:11.353 16:09:41 -- target/multiconnection.sh@47 -- # nvmftestfini 00:17:11.353 16:09:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:11.353 16:09:41 -- nvmf/common.sh@117 -- # sync 00:17:11.353 16:09:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:11.353 16:09:41 -- nvmf/common.sh@120 -- # set +e 00:17:11.353 16:09:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:11.353 16:09:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:11.353 rmmod nvme_tcp 00:17:11.353 rmmod nvme_fabrics 00:17:11.353 16:09:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:11.353 16:09:41 -- nvmf/common.sh@124 -- # set -e 00:17:11.353 16:09:41 -- nvmf/common.sh@125 -- # return 0 00:17:11.353 16:09:41 -- nvmf/common.sh@478 -- # '[' -n 84085 ']' 00:17:11.353 16:09:41 -- nvmf/common.sh@479 -- # killprocess 84085 00:17:11.353 16:09:41 -- common/autotest_common.sh@936 -- # '[' -z 84085 ']' 00:17:11.353 16:09:41 -- common/autotest_common.sh@940 -- # kill -0 84085 00:17:11.353 16:09:41 -- common/autotest_common.sh@941 -- # uname 00:17:11.353 16:09:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:11.353 16:09:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84085 00:17:11.353 killing process with pid 84085 00:17:11.353 16:09:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:11.353 16:09:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:11.353 16:09:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84085' 00:17:11.353 16:09:41 -- common/autotest_common.sh@955 -- # kill 84085 00:17:11.353 16:09:41 -- common/autotest_common.sh@960 -- # wait 84085 00:17:11.917 16:09:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:11.917 16:09:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:11.917 16:09:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:11.917 16:09:41 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.917 16:09:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:11.917 16:09:41 -- 
nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.917 16:09:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.917 16:09:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.917 16:09:41 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:11.917 ************************************ 00:17:11.917 END TEST nvmf_multiconnection 00:17:11.917 ************************************ 00:17:11.917 00:17:11.917 real 0m49.315s 00:17:11.917 user 2m41.550s 00:17:11.917 sys 0m35.644s 00:17:11.917 16:09:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:11.917 16:09:41 -- common/autotest_common.sh@10 -- # set +x 00:17:11.917 16:09:41 -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:17:11.917 16:09:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:11.917 16:09:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:11.917 16:09:41 -- common/autotest_common.sh@10 -- # set +x 00:17:11.917 ************************************ 00:17:11.917 START TEST nvmf_initiator_timeout 00:17:11.917 ************************************ 00:17:11.917 16:09:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:17:11.917 * Looking for test storage... 00:17:11.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:11.917 16:09:41 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:11.917 16:09:41 -- nvmf/common.sh@7 -- # uname -s 00:17:11.917 16:09:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.917 16:09:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.917 16:09:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.917 16:09:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.917 16:09:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.917 16:09:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.917 16:09:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.917 16:09:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.917 16:09:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.917 16:09:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.175 16:09:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:17:12.175 16:09:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:17:12.175 16:09:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.175 16:09:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.175 16:09:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:12.175 16:09:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.175 16:09:41 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:12.175 16:09:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.175 16:09:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.175 16:09:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.175 16:09:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.175 16:09:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.175 16:09:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.175 16:09:41 -- paths/export.sh@5 -- # export PATH 00:17:12.175 16:09:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.175 16:09:41 -- nvmf/common.sh@47 -- # : 0 00:17:12.175 16:09:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:12.175 16:09:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:12.175 16:09:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.175 16:09:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.175 16:09:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.175 16:09:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:12.175 16:09:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:12.175 16:09:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:12.175 16:09:41 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:12.175 16:09:41 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:12.175 16:09:41 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:17:12.175 16:09:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:12.176 16:09:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.176 16:09:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:12.176 16:09:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:12.176 16:09:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:12.176 16:09:41 -- nvmf/common.sh@617 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.176 16:09:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.176 16:09:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.176 16:09:41 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:12.176 16:09:41 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:12.176 16:09:41 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:12.176 16:09:41 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:12.176 16:09:41 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:12.176 16:09:41 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:12.176 16:09:41 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.176 16:09:41 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:12.176 16:09:41 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:12.176 16:09:41 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:12.176 16:09:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:12.176 16:09:41 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:12.176 16:09:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:12.176 16:09:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.176 16:09:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:12.176 16:09:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:12.176 16:09:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:12.176 16:09:41 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:12.176 16:09:41 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:12.176 16:09:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:12.176 Cannot find device "nvmf_tgt_br" 00:17:12.176 16:09:41 -- nvmf/common.sh@155 -- # true 00:17:12.176 16:09:41 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:12.176 Cannot find device "nvmf_tgt_br2" 00:17:12.176 16:09:41 -- nvmf/common.sh@156 -- # true 00:17:12.176 16:09:41 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:12.176 16:09:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:12.176 Cannot find device "nvmf_tgt_br" 00:17:12.176 16:09:41 -- nvmf/common.sh@158 -- # true 00:17:12.176 16:09:41 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:12.176 Cannot find device "nvmf_tgt_br2" 00:17:12.176 16:09:42 -- nvmf/common.sh@159 -- # true 00:17:12.176 16:09:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:12.176 16:09:42 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:12.176 16:09:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:12.176 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:12.176 16:09:42 -- nvmf/common.sh@162 -- # true 00:17:12.176 16:09:42 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:12.176 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:12.176 16:09:42 -- nvmf/common.sh@163 -- # true 00:17:12.176 16:09:42 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:12.176 16:09:42 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:12.176 16:09:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:12.176 16:09:42 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer 
name nvmf_tgt_br2 00:17:12.176 16:09:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:12.176 16:09:42 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:12.454 16:09:42 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:12.454 16:09:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:12.454 16:09:42 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:12.454 16:09:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:12.454 16:09:42 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:12.454 16:09:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:12.454 16:09:42 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:12.454 16:09:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:12.454 16:09:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:12.454 16:09:42 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:12.454 16:09:42 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:12.454 16:09:42 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:12.454 16:09:42 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:12.454 16:09:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:12.454 16:09:42 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:12.454 16:09:42 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:12.454 16:09:42 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:12.454 16:09:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:12.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:17:12.454 00:17:12.454 --- 10.0.0.2 ping statistics --- 00:17:12.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.454 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:12.454 16:09:42 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:12.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:12.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:17:12.454 00:17:12.454 --- 10.0.0.3 ping statistics --- 00:17:12.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.454 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:12.454 16:09:42 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:12.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:12.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:12.454 00:17:12.454 --- 10.0.0.1 ping statistics --- 00:17:12.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.454 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:12.454 16:09:42 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.454 16:09:42 -- nvmf/common.sh@422 -- # return 0 00:17:12.454 16:09:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:12.454 16:09:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.454 16:09:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:12.454 16:09:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:12.454 16:09:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.454 16:09:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:12.454 16:09:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:12.454 16:09:42 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:17:12.454 16:09:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:12.454 16:09:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:12.454 16:09:42 -- common/autotest_common.sh@10 -- # set +x 00:17:12.454 16:09:42 -- nvmf/common.sh@470 -- # nvmfpid=85151 00:17:12.454 16:09:42 -- nvmf/common.sh@471 -- # waitforlisten 85151 00:17:12.454 16:09:42 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:12.454 16:09:42 -- common/autotest_common.sh@817 -- # '[' -z 85151 ']' 00:17:12.454 16:09:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.454 16:09:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:12.454 16:09:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.454 16:09:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:12.454 16:09:42 -- common/autotest_common.sh@10 -- # set +x 00:17:12.454 [2024-04-15 16:09:42.412748] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:17:12.454 [2024-04-15 16:09:42.413053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.734 [2024-04-15 16:09:42.560856] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.734 [2024-04-15 16:09:42.616647] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.734 [2024-04-15 16:09:42.616914] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.734 [2024-04-15 16:09:42.617081] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.734 [2024-04-15 16:09:42.617213] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.735 [2024-04-15 16:09:42.617332] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:12.735 [2024-04-15 16:09:42.617545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.735 [2024-04-15 16:09:42.617766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.735 [2024-04-15 16:09:42.617769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.735 [2024-04-15 16:09:42.617717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.670 16:09:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:13.670 16:09:43 -- common/autotest_common.sh@850 -- # return 0 00:17:13.670 16:09:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:13.670 16:09:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:13.670 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:17:13.670 16:09:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.670 16:09:43 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:13.670 16:09:43 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:13.670 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.670 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:17:13.670 Malloc0 00:17:13.670 16:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.670 16:09:43 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:17:13.670 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.670 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:17:13.670 Delay0 00:17:13.670 16:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.670 16:09:43 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:13.670 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.670 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:17:13.670 [2024-04-15 16:09:43.470742] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.670 16:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.670 16:09:43 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:13.670 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.670 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:17:13.670 16:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.670 16:09:43 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:13.670 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.670 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:17:13.670 16:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.670 16:09:43 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.670 16:09:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.670 16:09:43 -- common/autotest_common.sh@10 -- # set +x 00:17:13.670 [2024-04-15 16:09:43.506911] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.670 16:09:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.670 16:09:43 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:13.929 16:09:43 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:17:13.929 16:09:43 -- common/autotest_common.sh@1184 -- # local i=0 00:17:13.929 16:09:43 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:13.929 16:09:43 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:13.929 16:09:43 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:15.847 16:09:45 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:15.847 16:09:45 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:15.847 16:09:45 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:15.847 16:09:45 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:15.847 16:09:45 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:15.847 16:09:45 -- common/autotest_common.sh@1194 -- # return 0 00:17:15.847 16:09:45 -- target/initiator_timeout.sh@35 -- # fio_pid=85211 00:17:15.847 16:09:45 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:17:15.847 16:09:45 -- target/initiator_timeout.sh@37 -- # sleep 3 00:17:15.847 [global] 00:17:15.847 thread=1 00:17:15.847 invalidate=1 00:17:15.847 rw=write 00:17:15.847 time_based=1 00:17:15.847 runtime=60 00:17:15.847 ioengine=libaio 00:17:15.847 direct=1 00:17:15.847 bs=4096 00:17:15.847 iodepth=1 00:17:15.847 norandommap=0 00:17:15.847 numjobs=1 00:17:15.847 00:17:15.847 verify_dump=1 00:17:15.847 verify_backlog=512 00:17:15.847 verify_state_save=0 00:17:15.847 do_verify=1 00:17:15.847 verify=crc32c-intel 00:17:15.847 [job0] 00:17:15.847 filename=/dev/nvme0n1 00:17:15.847 Could not set queue depth (nvme0n1) 00:17:16.111 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.111 fio-3.35 00:17:16.111 Starting 1 thread 00:17:19.394 16:09:48 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:17:19.394 16:09:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.394 16:09:48 -- common/autotest_common.sh@10 -- # set +x 00:17:19.394 true 00:17:19.394 16:09:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.394 16:09:48 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:17:19.394 16:09:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.394 16:09:48 -- common/autotest_common.sh@10 -- # set +x 00:17:19.394 true 00:17:19.394 16:09:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.394 16:09:48 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:17:19.394 16:09:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.394 16:09:48 -- common/autotest_common.sh@10 -- # set +x 00:17:19.394 true 00:17:19.394 16:09:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.394 16:09:48 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:17:19.394 16:09:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.394 16:09:48 -- common/autotest_common.sh@10 -- # set +x 00:17:19.394 true 00:17:19.394 16:09:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.394 16:09:48 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:17:21.939 16:09:51 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:17:21.939 16:09:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.939 16:09:51 -- common/autotest_common.sh@10 -- # set +x 00:17:21.939 true 00:17:21.939 16:09:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.939 16:09:51 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:17:21.939 16:09:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.939 16:09:51 -- common/autotest_common.sh@10 -- # set +x 00:17:21.939 true 00:17:21.939 16:09:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.939 16:09:51 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:17:21.939 16:09:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.939 16:09:51 -- common/autotest_common.sh@10 -- # set +x 00:17:21.939 true 00:17:21.939 16:09:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.939 16:09:51 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:17:21.939 16:09:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.939 16:09:51 -- common/autotest_common.sh@10 -- # set +x 00:17:21.939 true 00:17:21.939 16:09:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.939 16:09:51 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:17:21.939 16:09:51 -- target/initiator_timeout.sh@54 -- # wait 85211 00:18:18.150 00:18:18.150 job0: (groupid=0, jobs=1): err= 0: pid=85237: Mon Apr 15 16:10:45 2024 00:18:18.150 read: IOPS=861, BW=3447KiB/s (3530kB/s)(202MiB/60000msec) 00:18:18.150 slat (usec): min=7, max=14759, avg=11.49, stdev=73.23 00:18:18.150 clat (usec): min=16, max=40679k, avg=980.42, stdev=178885.44 00:18:18.150 lat (usec): min=146, max=40679k, avg=991.90, stdev=178885.44 00:18:18.150 clat percentiles (usec): 00:18:18.150 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 178], 00:18:18.150 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 196], 00:18:18.150 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 229], 00:18:18.150 | 99.00th=[ 262], 99.50th=[ 281], 99.90th=[ 570], 99.95th=[ 701], 00:18:18.150 | 99.99th=[ 848] 00:18:18.150 write: IOPS=869, BW=3479KiB/s (3562kB/s)(204MiB/60000msec); 0 zone resets 00:18:18.150 slat (usec): min=9, max=824, avg=16.74, stdev= 7.28 00:18:18.150 clat (usec): min=4, max=2527, avg=147.76, stdev=33.65 00:18:18.150 lat (usec): min=120, max=2551, avg=164.51, stdev=34.81 00:18:18.150 clat percentiles (usec): 00:18:18.150 | 1.00th=[ 119], 5.00th=[ 125], 10.00th=[ 129], 20.00th=[ 135], 00:18:18.150 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:18:18.150 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 176], 00:18:18.150 | 99.00th=[ 200], 99.50th=[ 219], 99.90th=[ 529], 99.95th=[ 660], 00:18:18.150 | 99.99th=[ 1696] 00:18:18.150 bw ( KiB/s): min= 1864, max=12288, per=100.00%, avg=10397.54, stdev=2076.74, samples=39 00:18:18.150 iops : min= 466, max= 3072, avg=2599.38, stdev=519.19, samples=39 00:18:18.150 lat (usec) : 10=0.01%, 20=0.01%, 100=0.01%, 250=98.96%, 500=0.90% 00:18:18.150 lat (usec) : 750=0.10%, 1000=0.02% 00:18:18.150 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:18:18.150 cpu : usr=0.52%, sys=1.90%, ctx=103930, majf=0, minf=2 00:18:18.150 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:18.150 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.150 issued rwts: total=51712,52182,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.150 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:18.150 00:18:18.150 Run status group 0 (all jobs): 00:18:18.150 READ: bw=3447KiB/s (3530kB/s), 3447KiB/s-3447KiB/s (3530kB/s-3530kB/s), io=202MiB (212MB), run=60000-60000msec 00:18:18.150 WRITE: bw=3479KiB/s (3562kB/s), 3479KiB/s-3479KiB/s (3562kB/s-3562kB/s), io=204MiB (214MB), run=60000-60000msec 00:18:18.150 00:18:18.150 Disk stats (read/write): 00:18:18.150 nvme0n1: ios=51832/51712, merge=0/0, ticks=10253/7951, in_queue=18204, util=99.85% 00:18:18.150 16:10:45 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:18.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.150 16:10:46 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:18.150 16:10:46 -- common/autotest_common.sh@1205 -- # local i=0 00:18:18.150 16:10:46 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:18.150 16:10:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:18.150 16:10:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:18.150 16:10:46 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:18.150 nvmf hotplug test: fio successful as expected 00:18:18.150 16:10:46 -- common/autotest_common.sh@1217 -- # return 0 00:18:18.150 16:10:46 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:18:18.150 16:10:46 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:18:18.150 16:10:46 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.150 16:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:18.150 16:10:46 -- common/autotest_common.sh@10 -- # set +x 00:18:18.150 16:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.150 16:10:46 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:18:18.150 16:10:46 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:18:18.150 16:10:46 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:18:18.150 16:10:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:18.150 16:10:46 -- nvmf/common.sh@117 -- # sync 00:18:18.150 16:10:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:18.150 16:10:46 -- nvmf/common.sh@120 -- # set +e 00:18:18.150 16:10:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:18.150 16:10:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:18.150 rmmod nvme_tcp 00:18:18.150 rmmod nvme_fabrics 00:18:18.150 16:10:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:18.150 16:10:46 -- nvmf/common.sh@124 -- # set -e 00:18:18.150 16:10:46 -- nvmf/common.sh@125 -- # return 0 00:18:18.150 16:10:46 -- nvmf/common.sh@478 -- # '[' -n 85151 ']' 00:18:18.150 16:10:46 -- nvmf/common.sh@479 -- # killprocess 85151 00:18:18.150 16:10:46 -- common/autotest_common.sh@936 -- # '[' -z 85151 ']' 00:18:18.150 16:10:46 -- common/autotest_common.sh@940 -- # kill -0 85151 00:18:18.150 16:10:46 -- common/autotest_common.sh@941 -- # uname 00:18:18.150 16:10:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.150 16:10:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85151 00:18:18.150 16:10:46 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:18.150 16:10:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:18.150 16:10:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85151' 00:18:18.150 killing process with pid 85151 00:18:18.150 16:10:46 -- common/autotest_common.sh@955 -- # kill 85151 00:18:18.150 16:10:46 -- common/autotest_common.sh@960 -- # wait 85151 00:18:18.150 16:10:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:18.150 16:10:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:18.150 16:10:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:18.150 16:10:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:18.150 16:10:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:18.150 16:10:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.150 16:10:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.150 16:10:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.150 16:10:46 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:18.150 ************************************ 00:18:18.150 END TEST nvmf_initiator_timeout 00:18:18.150 ************************************ 00:18:18.150 00:18:18.150 real 1m4.609s 00:18:18.150 user 3m49.844s 00:18:18.150 sys 0m25.067s 00:18:18.150 16:10:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:18.150 16:10:46 -- common/autotest_common.sh@10 -- # set +x 00:18:18.150 16:10:46 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:18:18.150 16:10:46 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:18:18.150 16:10:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:18.150 16:10:46 -- common/autotest_common.sh@10 -- # set +x 00:18:18.150 16:10:46 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:18:18.151 16:10:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:18.151 16:10:46 -- common/autotest_common.sh@10 -- # set +x 00:18:18.151 16:10:46 -- nvmf/nvmf.sh@88 -- # [[ 1 -eq 0 ]] 00:18:18.151 16:10:46 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:18.151 16:10:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:18.151 16:10:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:18.151 16:10:46 -- common/autotest_common.sh@10 -- # set +x 00:18:18.151 ************************************ 00:18:18.151 START TEST nvmf_identify 00:18:18.151 ************************************ 00:18:18.151 16:10:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:18.151 * Looking for test storage... 
00:18:18.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:18.151 16:10:46 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:18.151 16:10:46 -- nvmf/common.sh@7 -- # uname -s 00:18:18.151 16:10:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.151 16:10:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.151 16:10:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.151 16:10:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.151 16:10:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.151 16:10:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.151 16:10:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.151 16:10:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.151 16:10:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.151 16:10:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.151 16:10:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:18:18.151 16:10:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:18:18.151 16:10:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.151 16:10:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.151 16:10:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:18.151 16:10:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.151 16:10:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:18.151 16:10:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.151 16:10:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.151 16:10:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.151 16:10:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.151 16:10:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.151 16:10:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.151 16:10:46 -- paths/export.sh@5 -- # export PATH 00:18:18.151 16:10:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.151 16:10:46 -- nvmf/common.sh@47 -- # : 0 00:18:18.151 16:10:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:18.151 16:10:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:18.151 16:10:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.151 16:10:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.151 16:10:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.151 16:10:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:18.151 16:10:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:18.151 16:10:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:18.151 16:10:46 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:18.151 16:10:46 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:18.151 16:10:46 -- host/identify.sh@14 -- # nvmftestinit 00:18:18.151 16:10:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:18.151 16:10:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.151 16:10:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:18.151 16:10:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:18.151 16:10:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:18.151 16:10:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.151 16:10:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.151 16:10:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.151 16:10:46 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:18.151 16:10:46 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:18.151 16:10:46 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:18.151 16:10:46 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:18.151 16:10:46 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:18.151 16:10:46 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:18.151 16:10:46 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.151 16:10:46 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.151 16:10:46 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:18.151 16:10:46 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:18.151 16:10:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:18.151 16:10:46 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:18.151 16:10:46 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:18.151 16:10:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.151 16:10:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:18.151 16:10:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:18.151 16:10:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:18.151 16:10:46 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:18.151 16:10:46 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:18.151 16:10:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:18.151 Cannot find device "nvmf_tgt_br" 00:18:18.151 16:10:46 -- nvmf/common.sh@155 -- # true 00:18:18.151 16:10:46 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:18.151 Cannot find device "nvmf_tgt_br2" 00:18:18.151 16:10:46 -- nvmf/common.sh@156 -- # true 00:18:18.151 16:10:46 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:18.151 16:10:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:18.151 Cannot find device "nvmf_tgt_br" 00:18:18.151 16:10:46 -- nvmf/common.sh@158 -- # true 00:18:18.151 16:10:46 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:18.151 Cannot find device "nvmf_tgt_br2" 00:18:18.151 16:10:46 -- nvmf/common.sh@159 -- # true 00:18:18.151 16:10:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:18.151 16:10:46 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:18.151 16:10:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:18.151 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.151 16:10:46 -- nvmf/common.sh@162 -- # true 00:18:18.151 16:10:46 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:18.151 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.151 16:10:46 -- nvmf/common.sh@163 -- # true 00:18:18.151 16:10:46 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:18.151 16:10:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:18.151 16:10:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:18.151 16:10:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:18.151 16:10:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:18.151 16:10:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:18.151 16:10:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:18.151 16:10:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:18.151 16:10:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:18.151 16:10:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:18.151 16:10:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:18.151 16:10:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:18.151 16:10:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:18.151 16:10:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:18.151 16:10:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:18.151 16:10:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:18:18.151 16:10:47 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:18.151 16:10:47 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:18.151 16:10:47 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:18.151 16:10:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:18.151 16:10:47 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:18.151 16:10:47 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:18.151 16:10:47 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:18.151 16:10:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:18.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:18:18.151 00:18:18.151 --- 10.0.0.2 ping statistics --- 00:18:18.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.151 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:18.151 16:10:47 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:18.151 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:18.151 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:18:18.151 00:18:18.151 --- 10.0.0.3 ping statistics --- 00:18:18.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.152 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:18:18.152 16:10:47 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:18.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:18.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:18:18.152 00:18:18.152 --- 10.0.0.1 ping statistics --- 00:18:18.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.152 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:18.152 16:10:47 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.152 16:10:47 -- nvmf/common.sh@422 -- # return 0 00:18:18.152 16:10:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:18.152 16:10:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.152 16:10:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:18.152 16:10:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:18.152 16:10:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.152 16:10:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:18.152 16:10:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:18.152 16:10:47 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:18.152 16:10:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:18.152 16:10:47 -- common/autotest_common.sh@10 -- # set +x 00:18:18.152 16:10:47 -- host/identify.sh@19 -- # nvmfpid=86079 00:18:18.152 16:10:47 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:18.152 16:10:47 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:18.152 16:10:47 -- host/identify.sh@23 -- # waitforlisten 86079 00:18:18.152 16:10:47 -- common/autotest_common.sh@817 -- # '[' -z 86079 ']' 00:18:18.152 16:10:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.152 16:10:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:18.152 16:10:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:18.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.152 16:10:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:18.152 16:10:47 -- common/autotest_common.sh@10 -- # set +x 00:18:18.152 [2024-04-15 16:10:47.193516] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:18:18.152 [2024-04-15 16:10:47.193848] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.152 [2024-04-15 16:10:47.340163] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:18.152 [2024-04-15 16:10:47.411886] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.152 [2024-04-15 16:10:47.412156] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.152 [2024-04-15 16:10:47.412302] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.152 [2024-04-15 16:10:47.412495] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.152 [2024-04-15 16:10:47.412539] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:18.152 [2024-04-15 16:10:47.413071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.152 [2024-04-15 16:10:47.413194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.152 [2024-04-15 16:10:47.413286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.152 [2024-04-15 16:10:47.413283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:18.409 16:10:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:18.409 16:10:48 -- common/autotest_common.sh@850 -- # return 0 00:18:18.409 16:10:48 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:18.409 16:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:18.409 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:18:18.409 [2024-04-15 16:10:48.155567] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.409 16:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.409 16:10:48 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:18:18.409 16:10:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:18.409 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:18:18.409 16:10:48 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:18.409 16:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:18.409 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:18:18.409 Malloc0 00:18:18.409 16:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.409 16:10:48 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:18.409 16:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:18.409 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:18:18.409 16:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.409 16:10:48 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:18:18.409 16:10:48 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:18:18.409 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:18:18.409 16:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.409 16:10:48 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.409 16:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:18.409 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:18:18.409 [2024-04-15 16:10:48.243728] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.409 16:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.409 16:10:48 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:18.409 16:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:18.409 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:18:18.409 16:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.409 16:10:48 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:18:18.409 16:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:18.409 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:18:18.409 [2024-04-15 16:10:48.259481] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:18:18.409 [ 00:18:18.409 { 00:18:18.409 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:18.409 "subtype": "Discovery", 00:18:18.409 "listen_addresses": [ 00:18:18.409 { 00:18:18.409 "transport": "TCP", 00:18:18.409 "trtype": "TCP", 00:18:18.409 "adrfam": "IPv4", 00:18:18.409 "traddr": "10.0.0.2", 00:18:18.409 "trsvcid": "4420" 00:18:18.409 } 00:18:18.409 ], 00:18:18.409 "allow_any_host": true, 00:18:18.409 "hosts": [] 00:18:18.409 }, 00:18:18.409 { 00:18:18.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.409 "subtype": "NVMe", 00:18:18.409 "listen_addresses": [ 00:18:18.409 { 00:18:18.409 "transport": "TCP", 00:18:18.409 "trtype": "TCP", 00:18:18.409 "adrfam": "IPv4", 00:18:18.410 "traddr": "10.0.0.2", 00:18:18.410 "trsvcid": "4420" 00:18:18.410 } 00:18:18.410 ], 00:18:18.410 "allow_any_host": true, 00:18:18.410 "hosts": [], 00:18:18.410 "serial_number": "SPDK00000000000001", 00:18:18.410 "model_number": "SPDK bdev Controller", 00:18:18.410 "max_namespaces": 32, 00:18:18.410 "min_cntlid": 1, 00:18:18.410 "max_cntlid": 65519, 00:18:18.410 "namespaces": [ 00:18:18.410 { 00:18:18.410 "nsid": 1, 00:18:18.410 "bdev_name": "Malloc0", 00:18:18.410 "name": "Malloc0", 00:18:18.410 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:18:18.410 "eui64": "ABCDEF0123456789", 00:18:18.410 "uuid": "65ec005f-f06b-4253-b77f-ab208917c20c" 00:18:18.410 } 00:18:18.410 ] 00:18:18.410 } 00:18:18.410 ] 00:18:18.410 16:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.410 16:10:48 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:18:18.410 [2024-04-15 16:10:48.291949] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
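For reference, the subsystem layout reported by nvmf_get_subsystems above is built by the handful of RPCs traced just before it; a standalone sketch of the same sequence using scripts/rpc.py (the test issues identical RPCs through its rpc_cmd wrapper) looks like:

  # TCP transport with the same flags as traced above
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # malloc bdev sized per MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 above
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # NVM subsystem that allows any host (-a) with serial SPDK00000000000001
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  # data and discovery listeners on the in-namespace target address
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420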
00:18:18.410 [2024-04-15 16:10:48.292199] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86120 ] 00:18:18.673 [2024-04-15 16:10:48.422053] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:18:18.673 [2024-04-15 16:10:48.422128] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:18.673 [2024-04-15 16:10:48.422135] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:18.673 [2024-04-15 16:10:48.422149] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:18.673 [2024-04-15 16:10:48.422164] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:18:18.673 [2024-04-15 16:10:48.422322] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:18:18.673 [2024-04-15 16:10:48.422368] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1eac990 0 00:18:18.673 [2024-04-15 16:10:48.436614] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:18.673 [2024-04-15 16:10:48.436643] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:18.673 [2024-04-15 16:10:48.436650] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:18.674 [2024-04-15 16:10:48.436654] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:18.674 [2024-04-15 16:10:48.436731] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.436737] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.436743] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eac990) 00:18:18.674 [2024-04-15 16:10:48.436761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:18.674 [2024-04-15 16:10:48.436803] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3230, cid 0, qid 0 00:18:18.674 [2024-04-15 16:10:48.462607] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.674 [2024-04-15 16:10:48.462635] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.674 [2024-04-15 16:10:48.462640] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.462646] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3230) on tqpair=0x1eac990 00:18:18.674 [2024-04-15 16:10:48.462662] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:18.674 [2024-04-15 16:10:48.462673] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:18:18.674 [2024-04-15 16:10:48.462680] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:18:18.674 [2024-04-15 16:10:48.462704] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.462709] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.674 [2024-04-15 
16:10:48.462714] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eac990) 00:18:18.674 [2024-04-15 16:10:48.462728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.674 [2024-04-15 16:10:48.462770] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3230, cid 0, qid 0 00:18:18.674 [2024-04-15 16:10:48.462865] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.674 [2024-04-15 16:10:48.462872] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.674 [2024-04-15 16:10:48.462877] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.462882] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3230) on tqpair=0x1eac990 00:18:18.674 [2024-04-15 16:10:48.462894] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:18:18.674 [2024-04-15 16:10:48.462903] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:18:18.674 [2024-04-15 16:10:48.462911] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.462916] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.462921] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eac990) 00:18:18.674 [2024-04-15 16:10:48.462929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.674 [2024-04-15 16:10:48.462945] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3230, cid 0, qid 0 00:18:18.674 [2024-04-15 16:10:48.462992] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.674 [2024-04-15 16:10:48.462999] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.674 [2024-04-15 16:10:48.463004] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463008] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3230) on tqpair=0x1eac990 00:18:18.674 [2024-04-15 16:10:48.463016] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:18:18.674 [2024-04-15 16:10:48.463027] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:18:18.674 [2024-04-15 16:10:48.463034] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463039] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463044] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eac990) 00:18:18.674 [2024-04-15 16:10:48.463052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.674 [2024-04-15 16:10:48.463067] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3230, cid 0, qid 0 00:18:18.674 [2024-04-15 16:10:48.463111] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.674 [2024-04-15 16:10:48.463118] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.674 [2024-04-15 16:10:48.463122] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463127] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3230) on tqpair=0x1eac990 00:18:18.674 [2024-04-15 16:10:48.463135] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:18.674 [2024-04-15 16:10:48.463145] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463150] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463155] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eac990) 00:18:18.674 [2024-04-15 16:10:48.463162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.674 [2024-04-15 16:10:48.463177] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3230, cid 0, qid 0 00:18:18.674 [2024-04-15 16:10:48.463227] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.674 [2024-04-15 16:10:48.463234] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.674 [2024-04-15 16:10:48.463238] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463244] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3230) on tqpair=0x1eac990 00:18:18.674 [2024-04-15 16:10:48.463250] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:18:18.674 [2024-04-15 16:10:48.463257] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:18:18.674 [2024-04-15 16:10:48.463266] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:18.674 [2024-04-15 16:10:48.463373] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:18:18.674 [2024-04-15 16:10:48.463379] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:18.674 [2024-04-15 16:10:48.463390] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463394] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463399] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eac990) 00:18:18.674 [2024-04-15 16:10:48.463407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.674 [2024-04-15 16:10:48.463422] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3230, cid 0, qid 0 00:18:18.674 [2024-04-15 16:10:48.463464] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.674 [2024-04-15 16:10:48.463472] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.674 [2024-04-15 16:10:48.463476] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
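The DEBUG lines around this point trace the standard fabrics controller bring-up against the discovery subsystem (read VS and CAP, check and clear CC.EN, wait for CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1, then Identify). The same exchange can be driven against this target with the kernel initiator that the test already loads via modprobe nvme-tcp; a sketch, assuming nvme-cli is installed on the host:

  # pull the discovery log page from the SPDK discovery subsystem
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # optionally attach to the NVM subsystem created above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1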
00:18:18.674 [2024-04-15 16:10:48.463481] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3230) on tqpair=0x1eac990 00:18:18.674 [2024-04-15 16:10:48.463488] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:18.674 [2024-04-15 16:10:48.463498] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463503] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463508] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eac990) 00:18:18.674 [2024-04-15 16:10:48.463516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.674 [2024-04-15 16:10:48.463530] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3230, cid 0, qid 0 00:18:18.674 [2024-04-15 16:10:48.463587] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.674 [2024-04-15 16:10:48.463595] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.674 [2024-04-15 16:10:48.463599] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463604] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3230) on tqpair=0x1eac990 00:18:18.674 [2024-04-15 16:10:48.463611] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:18.674 [2024-04-15 16:10:48.463617] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:18:18.674 [2024-04-15 16:10:48.463626] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:18:18.674 [2024-04-15 16:10:48.463637] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:18:18.674 [2024-04-15 16:10:48.463648] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463653] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eac990) 00:18:18.674 [2024-04-15 16:10:48.463661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.674 [2024-04-15 16:10:48.463677] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3230, cid 0, qid 0 00:18:18.674 [2024-04-15 16:10:48.463771] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:18.674 [2024-04-15 16:10:48.463778] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:18.674 [2024-04-15 16:10:48.463782] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463788] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1eac990): datao=0, datal=4096, cccid=0 00:18:18.674 [2024-04-15 16:10:48.463794] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ee3230) on tqpair(0x1eac990): expected_datao=0, payload_size=4096 00:18:18.674 [2024-04-15 16:10:48.463800] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:18:18.674 [2024-04-15 16:10:48.463810] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463815] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463825] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.674 [2024-04-15 16:10:48.463832] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.674 [2024-04-15 16:10:48.463836] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.674 [2024-04-15 16:10:48.463841] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3230) on tqpair=0x1eac990 00:18:18.675 [2024-04-15 16:10:48.463853] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:18:18.675 [2024-04-15 16:10:48.463859] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:18:18.675 [2024-04-15 16:10:48.463865] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:18:18.675 [2024-04-15 16:10:48.463875] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:18:18.675 [2024-04-15 16:10:48.463881] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:18:18.675 [2024-04-15 16:10:48.463888] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:18:18.675 [2024-04-15 16:10:48.463898] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:18:18.675 [2024-04-15 16:10:48.463906] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.463911] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.463916] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eac990) 00:18:18.675 [2024-04-15 16:10:48.463924] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:18.675 [2024-04-15 16:10:48.463940] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3230, cid 0, qid 0 00:18:18.675 [2024-04-15 16:10:48.463988] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.675 [2024-04-15 16:10:48.463995] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.675 [2024-04-15 16:10:48.464000] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464004] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3230) on tqpair=0x1eac990 00:18:18.675 [2024-04-15 16:10:48.464015] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464019] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464024] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1eac990) 00:18:18.675 [2024-04-15 16:10:48.464031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.675 [2024-04-15 16:10:48.464039] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464044] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464049] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1eac990) 00:18:18.675 [2024-04-15 16:10:48.464055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.675 [2024-04-15 16:10:48.464063] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464068] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464072] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1eac990) 00:18:18.675 [2024-04-15 16:10:48.464079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.675 [2024-04-15 16:10:48.464087] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464091] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464096] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.675 [2024-04-15 16:10:48.464103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.675 [2024-04-15 16:10:48.464109] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:18:18.675 [2024-04-15 16:10:48.464130] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:18.675 [2024-04-15 16:10:48.464138] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464142] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1eac990) 00:18:18.675 [2024-04-15 16:10:48.464150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.675 [2024-04-15 16:10:48.464168] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3230, cid 0, qid 0 00:18:18.675 [2024-04-15 16:10:48.464174] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3390, cid 1, qid 0 00:18:18.675 [2024-04-15 16:10:48.464180] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee34f0, cid 2, qid 0 00:18:18.675 [2024-04-15 16:10:48.464186] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.675 [2024-04-15 16:10:48.464191] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee37b0, cid 4, qid 0 00:18:18.675 [2024-04-15 16:10:48.464274] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.675 [2024-04-15 16:10:48.464281] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.675 [2024-04-15 16:10:48.464286] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464291] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee37b0) on tqpair=0x1eac990 00:18:18.675 [2024-04-15 16:10:48.464298] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:18:18.675 [2024-04-15 16:10:48.464305] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:18:18.675 [2024-04-15 16:10:48.464316] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464321] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1eac990) 00:18:18.675 [2024-04-15 16:10:48.464328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.675 [2024-04-15 16:10:48.464344] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee37b0, cid 4, qid 0 00:18:18.675 [2024-04-15 16:10:48.464411] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:18.675 [2024-04-15 16:10:48.464418] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:18.675 [2024-04-15 16:10:48.464423] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464428] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1eac990): datao=0, datal=4096, cccid=4 00:18:18.675 [2024-04-15 16:10:48.464434] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ee37b0) on tqpair(0x1eac990): expected_datao=0, payload_size=4096 00:18:18.675 [2024-04-15 16:10:48.464439] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464447] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464451] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464460] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.675 [2024-04-15 16:10:48.464467] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.675 [2024-04-15 16:10:48.464472] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464477] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee37b0) on tqpair=0x1eac990 00:18:18.675 [2024-04-15 16:10:48.464492] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:18:18.675 [2024-04-15 16:10:48.464516] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464522] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1eac990) 00:18:18.675 [2024-04-15 16:10:48.464529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.675 [2024-04-15 16:10:48.464538] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464543] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464547] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1eac990) 00:18:18.675 [2024-04-15 16:10:48.464554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.675 [2024-04-15 16:10:48.464586] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x1ee37b0, cid 4, qid 0 00:18:18.675 [2024-04-15 16:10:48.464593] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3910, cid 5, qid 0 00:18:18.675 [2024-04-15 16:10:48.464717] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:18.675 [2024-04-15 16:10:48.464724] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:18.675 [2024-04-15 16:10:48.464728] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464733] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1eac990): datao=0, datal=1024, cccid=4 00:18:18.675 [2024-04-15 16:10:48.464739] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ee37b0) on tqpair(0x1eac990): expected_datao=0, payload_size=1024 00:18:18.675 [2024-04-15 16:10:48.464744] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.675 [2024-04-15 16:10:48.464752] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.464757] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.464764] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.676 [2024-04-15 16:10:48.464770] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.676 [2024-04-15 16:10:48.464775] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.464780] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3910) on tqpair=0x1eac990 00:18:18.676 [2024-04-15 16:10:48.464797] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.676 [2024-04-15 16:10:48.464804] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.676 [2024-04-15 16:10:48.464808] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.464813] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee37b0) on tqpair=0x1eac990 00:18:18.676 [2024-04-15 16:10:48.464830] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.464835] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1eac990) 00:18:18.676 [2024-04-15 16:10:48.464842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.676 [2024-04-15 16:10:48.464862] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee37b0, cid 4, qid 0 00:18:18.676 [2024-04-15 16:10:48.464925] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:18.676 [2024-04-15 16:10:48.464932] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:18.676 [2024-04-15 16:10:48.464936] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.464941] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1eac990): datao=0, datal=3072, cccid=4 00:18:18.676 [2024-04-15 16:10:48.464947] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ee37b0) on tqpair(0x1eac990): expected_datao=0, payload_size=3072 00:18:18.676 [2024-04-15 16:10:48.464953] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.464961] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.464965] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.464974] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.676 [2024-04-15 16:10:48.464981] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.676 [2024-04-15 16:10:48.464986] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.464991] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee37b0) on tqpair=0x1eac990 00:18:18.676 [2024-04-15 16:10:48.465001] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.465006] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1eac990) 00:18:18.676 [2024-04-15 16:10:48.465013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.676 [2024-04-15 16:10:48.465033] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee37b0, cid 4, qid 0 00:18:18.676 [2024-04-15 16:10:48.465098] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:18.676 [2024-04-15 16:10:48.465105] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:18.676 [2024-04-15 16:10:48.465110] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.465114] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1eac990): datao=0, datal=8, cccid=4 00:18:18.676 [2024-04-15 16:10:48.465120] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ee37b0) on tqpair(0x1eac990): expected_datao=0, payload_size=8 00:18:18.676 [2024-04-15 16:10:48.465126] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.465133] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.465138] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.465152] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.676 [2024-04-15 16:10:48.465159] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.676 [2024-04-15 16:10:48.465164] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.676 [2024-04-15 16:10:48.465168] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee37b0) on tqpair=0x1eac990 00:18:18.676 ===================================================== 00:18:18.676 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:18.676 ===================================================== 00:18:18.676 Controller Capabilities/Features 00:18:18.676 ================================ 00:18:18.676 Vendor ID: 0000 00:18:18.676 Subsystem Vendor ID: 0000 00:18:18.676 Serial Number: .................... 00:18:18.676 Model Number: ........................................ 
00:18:18.676 Firmware Version: 24.05 00:18:18.676 Recommended Arb Burst: 0 00:18:18.676 IEEE OUI Identifier: 00 00 00 00:18:18.676 Multi-path I/O 00:18:18.676 May have multiple subsystem ports: No 00:18:18.676 May have multiple controllers: No 00:18:18.676 Associated with SR-IOV VF: No 00:18:18.676 Max Data Transfer Size: 131072 00:18:18.676 Max Number of Namespaces: 0 00:18:18.676 Max Number of I/O Queues: 1024 00:18:18.676 NVMe Specification Version (VS): 1.3 00:18:18.676 NVMe Specification Version (Identify): 1.3 00:18:18.676 Maximum Queue Entries: 128 00:18:18.676 Contiguous Queues Required: Yes 00:18:18.676 Arbitration Mechanisms Supported 00:18:18.676 Weighted Round Robin: Not Supported 00:18:18.676 Vendor Specific: Not Supported 00:18:18.676 Reset Timeout: 15000 ms 00:18:18.676 Doorbell Stride: 4 bytes 00:18:18.676 NVM Subsystem Reset: Not Supported 00:18:18.676 Command Sets Supported 00:18:18.676 NVM Command Set: Supported 00:18:18.676 Boot Partition: Not Supported 00:18:18.676 Memory Page Size Minimum: 4096 bytes 00:18:18.676 Memory Page Size Maximum: 4096 bytes 00:18:18.676 Persistent Memory Region: Not Supported 00:18:18.676 Optional Asynchronous Events Supported 00:18:18.676 Namespace Attribute Notices: Not Supported 00:18:18.676 Firmware Activation Notices: Not Supported 00:18:18.676 ANA Change Notices: Not Supported 00:18:18.676 PLE Aggregate Log Change Notices: Not Supported 00:18:18.676 LBA Status Info Alert Notices: Not Supported 00:18:18.676 EGE Aggregate Log Change Notices: Not Supported 00:18:18.676 Normal NVM Subsystem Shutdown event: Not Supported 00:18:18.676 Zone Descriptor Change Notices: Not Supported 00:18:18.676 Discovery Log Change Notices: Supported 00:18:18.676 Controller Attributes 00:18:18.676 128-bit Host Identifier: Not Supported 00:18:18.676 Non-Operational Permissive Mode: Not Supported 00:18:18.676 NVM Sets: Not Supported 00:18:18.676 Read Recovery Levels: Not Supported 00:18:18.676 Endurance Groups: Not Supported 00:18:18.676 Predictable Latency Mode: Not Supported 00:18:18.676 Traffic Based Keep ALive: Not Supported 00:18:18.676 Namespace Granularity: Not Supported 00:18:18.676 SQ Associations: Not Supported 00:18:18.676 UUID List: Not Supported 00:18:18.676 Multi-Domain Subsystem: Not Supported 00:18:18.676 Fixed Capacity Management: Not Supported 00:18:18.676 Variable Capacity Management: Not Supported 00:18:18.676 Delete Endurance Group: Not Supported 00:18:18.676 Delete NVM Set: Not Supported 00:18:18.676 Extended LBA Formats Supported: Not Supported 00:18:18.676 Flexible Data Placement Supported: Not Supported 00:18:18.676 00:18:18.676 Controller Memory Buffer Support 00:18:18.676 ================================ 00:18:18.676 Supported: No 00:18:18.676 00:18:18.676 Persistent Memory Region Support 00:18:18.676 ================================ 00:18:18.676 Supported: No 00:18:18.676 00:18:18.676 Admin Command Set Attributes 00:18:18.676 ============================ 00:18:18.676 Security Send/Receive: Not Supported 00:18:18.676 Format NVM: Not Supported 00:18:18.676 Firmware Activate/Download: Not Supported 00:18:18.676 Namespace Management: Not Supported 00:18:18.676 Device Self-Test: Not Supported 00:18:18.676 Directives: Not Supported 00:18:18.676 NVMe-MI: Not Supported 00:18:18.676 Virtualization Management: Not Supported 00:18:18.676 Doorbell Buffer Config: Not Supported 00:18:18.676 Get LBA Status Capability: Not Supported 00:18:18.676 Command & Feature Lockdown Capability: Not Supported 00:18:18.676 Abort Command Limit: 1 00:18:18.676 Async 
Event Request Limit: 4 00:18:18.676 Number of Firmware Slots: N/A 00:18:18.676 Firmware Slot 1 Read-Only: N/A 00:18:18.676 Firmware Activation Without Reset: N/A 00:18:18.676 Multiple Update Detection Support: N/A 00:18:18.676 Firmware Update Granularity: No Information Provided 00:18:18.676 Per-Namespace SMART Log: No 00:18:18.676 Asymmetric Namespace Access Log Page: Not Supported 00:18:18.676 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:18.676 Command Effects Log Page: Not Supported 00:18:18.676 Get Log Page Extended Data: Supported 00:18:18.676 Telemetry Log Pages: Not Supported 00:18:18.676 Persistent Event Log Pages: Not Supported 00:18:18.676 Supported Log Pages Log Page: May Support 00:18:18.676 Commands Supported & Effects Log Page: Not Supported 00:18:18.676 Feature Identifiers & Effects Log Page:May Support 00:18:18.676 NVMe-MI Commands & Effects Log Page: May Support 00:18:18.676 Data Area 4 for Telemetry Log: Not Supported 00:18:18.676 Error Log Page Entries Supported: 128 00:18:18.676 Keep Alive: Not Supported 00:18:18.676 00:18:18.676 NVM Command Set Attributes 00:18:18.677 ========================== 00:18:18.677 Submission Queue Entry Size 00:18:18.677 Max: 1 00:18:18.677 Min: 1 00:18:18.677 Completion Queue Entry Size 00:18:18.677 Max: 1 00:18:18.677 Min: 1 00:18:18.677 Number of Namespaces: 0 00:18:18.677 Compare Command: Not Supported 00:18:18.677 Write Uncorrectable Command: Not Supported 00:18:18.677 Dataset Management Command: Not Supported 00:18:18.677 Write Zeroes Command: Not Supported 00:18:18.677 Set Features Save Field: Not Supported 00:18:18.677 Reservations: Not Supported 00:18:18.677 Timestamp: Not Supported 00:18:18.677 Copy: Not Supported 00:18:18.677 Volatile Write Cache: Not Present 00:18:18.677 Atomic Write Unit (Normal): 1 00:18:18.677 Atomic Write Unit (PFail): 1 00:18:18.677 Atomic Compare & Write Unit: 1 00:18:18.677 Fused Compare & Write: Supported 00:18:18.677 Scatter-Gather List 00:18:18.677 SGL Command Set: Supported 00:18:18.677 SGL Keyed: Supported 00:18:18.677 SGL Bit Bucket Descriptor: Not Supported 00:18:18.677 SGL Metadata Pointer: Not Supported 00:18:18.677 Oversized SGL: Not Supported 00:18:18.677 SGL Metadata Address: Not Supported 00:18:18.677 SGL Offset: Supported 00:18:18.677 Transport SGL Data Block: Not Supported 00:18:18.677 Replay Protected Memory Block: Not Supported 00:18:18.677 00:18:18.677 Firmware Slot Information 00:18:18.677 ========================= 00:18:18.677 Active slot: 0 00:18:18.677 00:18:18.677 00:18:18.677 Error Log 00:18:18.677 ========= 00:18:18.677 00:18:18.677 Active Namespaces 00:18:18.677 ================= 00:18:18.677 Discovery Log Page 00:18:18.677 ================== 00:18:18.677 Generation Counter: 2 00:18:18.677 Number of Records: 2 00:18:18.677 Record Format: 0 00:18:18.677 00:18:18.677 Discovery Log Entry 0 00:18:18.677 ---------------------- 00:18:18.677 Transport Type: 3 (TCP) 00:18:18.677 Address Family: 1 (IPv4) 00:18:18.677 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:18.677 Entry Flags: 00:18:18.677 Duplicate Returned Information: 1 00:18:18.677 Explicit Persistent Connection Support for Discovery: 1 00:18:18.677 Transport Requirements: 00:18:18.677 Secure Channel: Not Required 00:18:18.677 Port ID: 0 (0x0000) 00:18:18.677 Controller ID: 65535 (0xffff) 00:18:18.677 Admin Max SQ Size: 128 00:18:18.677 Transport Service Identifier: 4420 00:18:18.677 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:18.677 Transport Address: 10.0.0.2 00:18:18.677 
Discovery Log Entry 1 00:18:18.677 ---------------------- 00:18:18.677 Transport Type: 3 (TCP) 00:18:18.677 Address Family: 1 (IPv4) 00:18:18.677 Subsystem Type: 2 (NVM Subsystem) 00:18:18.677 Entry Flags: 00:18:18.677 Duplicate Returned Information: 0 00:18:18.677 Explicit Persistent Connection Support for Discovery: 0 00:18:18.677 Transport Requirements: 00:18:18.677 Secure Channel: Not Required 00:18:18.677 Port ID: 0 (0x0000) 00:18:18.677 Controller ID: 65535 (0xffff) 00:18:18.677 Admin Max SQ Size: 128 00:18:18.677 Transport Service Identifier: 4420 00:18:18.677 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:18:18.677 Transport Address: 10.0.0.2 [2024-04-15 16:10:48.465266] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:18:18.677 [2024-04-15 16:10:48.465281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.677 [2024-04-15 16:10:48.465289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.677 [2024-04-15 16:10:48.465296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.677 [2024-04-15 16:10:48.465304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.677 [2024-04-15 16:10:48.465313] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.677 [2024-04-15 16:10:48.465318] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.677 [2024-04-15 16:10:48.465323] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.677 [2024-04-15 16:10:48.465331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.677 [2024-04-15 16:10:48.465359] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.677 [2024-04-15 16:10:48.465411] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.677 [2024-04-15 16:10:48.465418] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.677 [2024-04-15 16:10:48.465422] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.677 [2024-04-15 16:10:48.465427] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.677 [2024-04-15 16:10:48.465440] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.677 [2024-04-15 16:10:48.465445] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.677 [2024-04-15 16:10:48.465450] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.677 [2024-04-15 16:10:48.465458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.677 [2024-04-15 16:10:48.465476] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.677 [2024-04-15 16:10:48.465537] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.677 [2024-04-15 16:10:48.465544] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.677 [2024-04-15 16:10:48.465549] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.677 [2024-04-15 16:10:48.465554] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.677 [2024-04-15 16:10:48.465561] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:18:18.677 [2024-04-15 16:10:48.465567] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:18:18.677 [2024-04-15 16:10:48.465588] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.677 [2024-04-15 16:10:48.465593] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.677 [2024-04-15 16:10:48.465598] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.677 [2024-04-15 16:10:48.465605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:18.683 [2024-04-15 16:10:48.471703] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.683 [2024-04-15 16:10:48.471750] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.683 [2024-04-15 16:10:48.471757] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.683 [2024-04-15 16:10:48.471762] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.471767] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.683 [2024-04-15 16:10:48.471778] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.471783] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.471788] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.683 [2024-04-15 16:10:48.471795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.683 [2024-04-15 16:10:48.471810] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.683 [2024-04-15 16:10:48.471861] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.683 [2024-04-15 16:10:48.471875] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.683 [2024-04-15 16:10:48.471880] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.471885] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.683 [2024-04-15 16:10:48.471897] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.471902] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.471907] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.683 [2024-04-15 16:10:48.471914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.683 [2024-04-15 16:10:48.471930] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.683 [2024-04-15 16:10:48.471975] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.683 [2024-04-15 16:10:48.471987] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.683 [2024-04-15 16:10:48.471993] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.471997] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.683 [2024-04-15 16:10:48.472009] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472014] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472019] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.683 [2024-04-15 16:10:48.472026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.683 [2024-04-15 16:10:48.472042] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.683 [2024-04-15 16:10:48.472090] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.683 [2024-04-15 16:10:48.472104] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.683 [2024-04-15 16:10:48.472109] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472114] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.683 [2024-04-15 16:10:48.472126] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472131] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472135] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.683 [2024-04-15 16:10:48.472143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.683 [2024-04-15 16:10:48.472158] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.683 [2024-04-15 16:10:48.472202] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.683 [2024-04-15 16:10:48.472210] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.683 [2024-04-15 16:10:48.472214] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472220] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.683 [2024-04-15 16:10:48.472231] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472236] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472241] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.683 [2024-04-15 16:10:48.472248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.683 [2024-04-15 16:10:48.472263] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.683 [2024-04-15 16:10:48.472314] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.683 [2024-04-15 16:10:48.472321] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.683 [2024-04-15 16:10:48.472326] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472331] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.683 [2024-04-15 16:10:48.472342] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472347] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472352] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.683 [2024-04-15 16:10:48.472360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.683 [2024-04-15 16:10:48.472374] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.683 [2024-04-15 16:10:48.472423] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.683 [2024-04-15 16:10:48.472430] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.683 
[2024-04-15 16:10:48.472435] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472440] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.683 [2024-04-15 16:10:48.472451] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472456] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472461] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.683 [2024-04-15 16:10:48.472469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.683 [2024-04-15 16:10:48.472484] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.683 [2024-04-15 16:10:48.472526] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.683 [2024-04-15 16:10:48.472533] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.683 [2024-04-15 16:10:48.472538] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472543] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.683 [2024-04-15 16:10:48.472554] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472560] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.683 [2024-04-15 16:10:48.472564] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.683 [2024-04-15 16:10:48.472572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.684 [2024-04-15 16:10:48.472595] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.684 [2024-04-15 16:10:48.472649] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.684 [2024-04-15 16:10:48.472656] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.684 [2024-04-15 16:10:48.472660] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.472665] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.684 [2024-04-15 16:10:48.472677] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.472681] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.472686] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.684 [2024-04-15 16:10:48.472694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.684 [2024-04-15 16:10:48.472709] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.684 [2024-04-15 16:10:48.472753] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.684 [2024-04-15 16:10:48.472760] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.684 [2024-04-15 16:10:48.472764] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.472769] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.684 [2024-04-15 16:10:48.472781] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.472786] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.472790] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.684 [2024-04-15 16:10:48.472798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.684 [2024-04-15 16:10:48.472813] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.684 [2024-04-15 16:10:48.472855] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.684 [2024-04-15 16:10:48.472866] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.684 [2024-04-15 16:10:48.472871] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.472876] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.684 [2024-04-15 16:10:48.472887] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.472892] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.472897] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.684 [2024-04-15 16:10:48.472905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.684 [2024-04-15 16:10:48.472920] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.684 [2024-04-15 16:10:48.472968] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.684 [2024-04-15 16:10:48.472975] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.684 [2024-04-15 16:10:48.472979] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.472984] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.684 [2024-04-15 16:10:48.472995] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473000] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473005] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.684 [2024-04-15 16:10:48.473012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.684 [2024-04-15 16:10:48.473027] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.684 [2024-04-15 16:10:48.473075] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.684 [2024-04-15 16:10:48.473082] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.684 [2024-04-15 16:10:48.473086] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473091] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.684 [2024-04-15 16:10:48.473103] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473108] 
nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473112] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.684 [2024-04-15 16:10:48.473120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.684 [2024-04-15 16:10:48.473134] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.684 [2024-04-15 16:10:48.473179] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.684 [2024-04-15 16:10:48.473186] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.684 [2024-04-15 16:10:48.473190] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473195] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.684 [2024-04-15 16:10:48.473207] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473211] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473216] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.684 [2024-04-15 16:10:48.473224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.684 [2024-04-15 16:10:48.473238] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.684 [2024-04-15 16:10:48.473283] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.684 [2024-04-15 16:10:48.473290] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.684 [2024-04-15 16:10:48.473294] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473299] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.684 [2024-04-15 16:10:48.473311] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473316] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473320] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.684 [2024-04-15 16:10:48.473328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.684 [2024-04-15 16:10:48.473351] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.684 [2024-04-15 16:10:48.473402] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.684 [2024-04-15 16:10:48.473412] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.684 [2024-04-15 16:10:48.473417] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473422] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.684 [2024-04-15 16:10:48.473433] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473438] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473443] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1eac990) 00:18:18.684 [2024-04-15 16:10:48.473451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.684 [2024-04-15 16:10:48.473466] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.684 [2024-04-15 16:10:48.473513] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.684 [2024-04-15 16:10:48.473523] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.684 [2024-04-15 16:10:48.473528] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473533] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.684 [2024-04-15 16:10:48.473545] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473550] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.684 [2024-04-15 16:10:48.473554] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.684 [2024-04-15 16:10:48.473562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.685 [2024-04-15 16:10:48.473590] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.685 [2024-04-15 16:10:48.473632] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.685 [2024-04-15 16:10:48.473642] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.685 [2024-04-15 16:10:48.473647] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.473652] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.685 [2024-04-15 16:10:48.473663] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.473668] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.473673] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.685 [2024-04-15 16:10:48.473680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.685 [2024-04-15 16:10:48.473696] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.685 [2024-04-15 16:10:48.473743] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.685 [2024-04-15 16:10:48.473750] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.685 [2024-04-15 16:10:48.473755] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.473760] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.685 [2024-04-15 16:10:48.473771] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.473776] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.473781] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.685 [2024-04-15 16:10:48.473789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:18.685 [2024-04-15 16:10:48.473803] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.685 [2024-04-15 16:10:48.473854] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.685 [2024-04-15 16:10:48.473864] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.685 [2024-04-15 16:10:48.473869] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.473874] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.685 [2024-04-15 16:10:48.473886] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.473891] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.473895] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.685 [2024-04-15 16:10:48.473903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.685 [2024-04-15 16:10:48.473918] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.685 [2024-04-15 16:10:48.473962] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.685 [2024-04-15 16:10:48.473969] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.685 [2024-04-15 16:10:48.473974] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.473979] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.685 [2024-04-15 16:10:48.473990] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.473995] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474000] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.685 [2024-04-15 16:10:48.474008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.685 [2024-04-15 16:10:48.474022] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.685 [2024-04-15 16:10:48.474070] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.685 [2024-04-15 16:10:48.474077] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.685 [2024-04-15 16:10:48.474082] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474086] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.685 [2024-04-15 16:10:48.474098] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474103] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474108] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.685 [2024-04-15 16:10:48.474115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.685 [2024-04-15 16:10:48.474130] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.685 [2024-04-15 16:10:48.474172] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.685 [2024-04-15 16:10:48.474182] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.685 [2024-04-15 16:10:48.474187] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474192] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.685 [2024-04-15 16:10:48.474203] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474208] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474213] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.685 [2024-04-15 16:10:48.474221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.685 [2024-04-15 16:10:48.474236] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.685 [2024-04-15 16:10:48.474285] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.685 [2024-04-15 16:10:48.474295] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.685 [2024-04-15 16:10:48.474300] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474305] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.685 [2024-04-15 16:10:48.474316] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474321] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474326] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.685 [2024-04-15 16:10:48.474334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.685 [2024-04-15 16:10:48.474350] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.685 [2024-04-15 16:10:48.474399] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.685 [2024-04-15 16:10:48.474406] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.685 [2024-04-15 16:10:48.474411] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474416] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.685 [2024-04-15 16:10:48.474427] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474432] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474437] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.685 [2024-04-15 16:10:48.474444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.685 [2024-04-15 16:10:48.474460] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.685 [2024-04-15 16:10:48.474514] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.685 [2024-04-15 16:10:48.474521] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.685 
[2024-04-15 16:10:48.474525] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474530] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.685 [2024-04-15 16:10:48.474542] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474547] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474551] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.685 [2024-04-15 16:10:48.474559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.685 [2024-04-15 16:10:48.474582] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.685 [2024-04-15 16:10:48.474627] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.685 [2024-04-15 16:10:48.474634] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.685 [2024-04-15 16:10:48.474638] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474643] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.685 [2024-04-15 16:10:48.474655] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474660] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474664] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.685 [2024-04-15 16:10:48.474672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.685 [2024-04-15 16:10:48.474687] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.685 [2024-04-15 16:10:48.474735] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.685 [2024-04-15 16:10:48.474745] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.685 [2024-04-15 16:10:48.474750] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474755] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.685 [2024-04-15 16:10:48.474767] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474771] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474776] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.685 [2024-04-15 16:10:48.474784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.685 [2024-04-15 16:10:48.474799] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.685 [2024-04-15 16:10:48.474847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.685 [2024-04-15 16:10:48.474854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.685 [2024-04-15 16:10:48.474859] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474864] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.685 [2024-04-15 16:10:48.474875] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474880] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.685 [2024-04-15 16:10:48.474885] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.686 [2024-04-15 16:10:48.474892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.686 [2024-04-15 16:10:48.474907] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.686 [2024-04-15 16:10:48.474956] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.686 [2024-04-15 16:10:48.474966] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.686 [2024-04-15 16:10:48.474971] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.474976] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.686 [2024-04-15 16:10:48.474988] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.474993] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.474997] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.686 [2024-04-15 16:10:48.475005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.686 [2024-04-15 16:10:48.475020] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.686 [2024-04-15 16:10:48.475071] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.686 [2024-04-15 16:10:48.475078] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.686 [2024-04-15 16:10:48.475083] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475088] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.686 [2024-04-15 16:10:48.475100] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475105] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475109] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.686 [2024-04-15 16:10:48.475117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.686 [2024-04-15 16:10:48.475131] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.686 [2024-04-15 16:10:48.475176] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.686 [2024-04-15 16:10:48.475183] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.686 [2024-04-15 16:10:48.475188] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475192] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.686 [2024-04-15 16:10:48.475204] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475209] 
nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475214] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.686 [2024-04-15 16:10:48.475221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.686 [2024-04-15 16:10:48.475236] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.686 [2024-04-15 16:10:48.475287] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.686 [2024-04-15 16:10:48.475294] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.686 [2024-04-15 16:10:48.475299] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475304] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.686 [2024-04-15 16:10:48.475315] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475320] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475325] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.686 [2024-04-15 16:10:48.475332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.686 [2024-04-15 16:10:48.475347] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.686 [2024-04-15 16:10:48.475397] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.686 [2024-04-15 16:10:48.475407] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.686 [2024-04-15 16:10:48.475412] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475417] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.686 [2024-04-15 16:10:48.475429] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475434] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475439] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.686 [2024-04-15 16:10:48.475447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.686 [2024-04-15 16:10:48.475462] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.686 [2024-04-15 16:10:48.475513] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.686 [2024-04-15 16:10:48.475523] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.686 [2024-04-15 16:10:48.475527] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475532] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.686 [2024-04-15 16:10:48.475544] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475549] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.475553] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1eac990) 00:18:18.686 [2024-04-15 16:10:48.475561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.686 [2024-04-15 16:10:48.488596] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.686 [2024-04-15 16:10:48.488641] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.686 [2024-04-15 16:10:48.488650] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.686 [2024-04-15 16:10:48.488655] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.488660] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.686 [2024-04-15 16:10:48.488681] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.488686] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.488691] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1eac990) 00:18:18.686 [2024-04-15 16:10:48.488702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.686 [2024-04-15 16:10:48.488746] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ee3650, cid 3, qid 0 00:18:18.686 [2024-04-15 16:10:48.488803] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.686 [2024-04-15 16:10:48.488810] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.686 [2024-04-15 16:10:48.488814] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.686 [2024-04-15 16:10:48.488819] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ee3650) on tqpair=0x1eac990 00:18:18.686 [2024-04-15 16:10:48.488829] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 23 milliseconds 00:18:18.686 00:18:18.686 16:10:48 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:18:18.686 [2024-04-15 16:10:48.529566] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
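The repeated FABRIC PROPERTY GET entries above are the host polling controller status until the discovery controller's shutdown completes; the run then invokes spdk_nvme_identify against the data subsystem at 10.0.0.2:4420 using the transport ID string shown in the command line. For reference only, a minimal C sketch of the same connect-and-identify flow over NVMe/TCP through SPDK's public API follows; this is not the identify tool's actual source, the application name is made up, and error handling is trimmed to bare checks.

  #include <stdio.h>
  #include <string.h>
  #include "spdk/env.h"
  #include "spdk/nvme.h"

  int main(void)
  {
          struct spdk_env_opts env_opts;
          struct spdk_nvme_transport_id trid;
          struct spdk_nvme_ctrlr *ctrlr;
          const struct spdk_nvme_ctrlr_data *cdata;

          /* Initialize the SPDK environment (app name here is arbitrary). */
          spdk_env_opts_init(&env_opts);
          env_opts.name = "identify_sketch";
          if (spdk_env_init(&env_opts) < 0) {
                  return 1;
          }

          /* Same transport ID the test passes via -r on the command line above. */
          memset(&trid, 0, sizeof(trid));
          if (spdk_nvme_transport_id_parse(&trid,
                  "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                  "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                  return 1;
          }

          /* Synchronous connect: walks the controller through CC.EN/CSTS.RDY
           * and issues IDENTIFY, comparable to the admin-queue debug entries
           * in this log. */
          ctrlr = spdk_nvme_connect(&trid, NULL, 0);
          if (ctrlr == NULL) {
                  return 1;
          }

          /* Cached IDENTIFY CONTROLLER data; mn is a fixed-width, unterminated field. */
          cdata = spdk_nvme_ctrlr_get_data(ctrlr);
          printf("Model: %.*s\n", (int)sizeof(cdata->mn), cdata->mn);

          /* Detach shuts the controller down, the same kind of status poll
           * seen above for the discovery controller. */
          spdk_nvme_detach(ctrlr);
          return 0;
  }

Built against an installed SPDK with the target up, a sketch like this would be expected to produce essentially the same CC.EN/CSTS.RDY and IDENTIFY admin-command traffic that appears in the debug output that follows.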
00:18:18.686 [2024-04-15 16:10:48.529638] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86122 ] 00:18:18.951 [2024-04-15 16:10:48.666119] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:18:18.951 [2024-04-15 16:10:48.666193] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:18.951 [2024-04-15 16:10:48.666200] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:18.951 [2024-04-15 16:10:48.666214] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:18.952 [2024-04-15 16:10:48.666229] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:18:18.952 [2024-04-15 16:10:48.666384] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:18:18.952 [2024-04-15 16:10:48.666433] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x132b990 0 00:18:18.952 [2024-04-15 16:10:48.680617] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:18.952 [2024-04-15 16:10:48.680652] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:18.952 [2024-04-15 16:10:48.680659] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:18.952 [2024-04-15 16:10:48.680663] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:18.952 [2024-04-15 16:10:48.680723] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.680730] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.680735] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b990) 00:18:18.952 [2024-04-15 16:10:48.680752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:18.952 [2024-04-15 16:10:48.680799] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362230, cid 0, qid 0 00:18:18.952 [2024-04-15 16:10:48.706604] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.952 [2024-04-15 16:10:48.706631] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.952 [2024-04-15 16:10:48.706636] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.706642] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362230) on tqpair=0x132b990 00:18:18.952 [2024-04-15 16:10:48.706658] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:18.952 [2024-04-15 16:10:48.706669] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:18:18.952 [2024-04-15 16:10:48.706677] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:18:18.952 [2024-04-15 16:10:48.706702] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.706707] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.706712] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b990) 00:18:18.952 [2024-04-15 16:10:48.706724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.952 [2024-04-15 16:10:48.706764] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362230, cid 0, qid 0 00:18:18.952 [2024-04-15 16:10:48.706831] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.952 [2024-04-15 16:10:48.706838] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.952 [2024-04-15 16:10:48.706842] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.706847] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362230) on tqpair=0x132b990 00:18:18.952 [2024-04-15 16:10:48.706858] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:18:18.952 [2024-04-15 16:10:48.706867] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:18:18.952 [2024-04-15 16:10:48.706876] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.706881] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.706885] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b990) 00:18:18.952 [2024-04-15 16:10:48.706893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.952 [2024-04-15 16:10:48.706910] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362230, cid 0, qid 0 00:18:18.952 [2024-04-15 16:10:48.706961] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.952 [2024-04-15 16:10:48.706968] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.952 [2024-04-15 16:10:48.706973] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.706978] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362230) on tqpair=0x132b990 00:18:18.952 [2024-04-15 16:10:48.706985] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:18:18.952 [2024-04-15 16:10:48.706995] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:18:18.952 [2024-04-15 16:10:48.707003] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.707007] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.707012] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b990) 00:18:18.952 [2024-04-15 16:10:48.707019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.952 [2024-04-15 16:10:48.707035] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362230, cid 0, qid 0 00:18:18.952 [2024-04-15 16:10:48.707080] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.952 [2024-04-15 16:10:48.707087] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.952 [2024-04-15 
16:10:48.707091] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.707096] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362230) on tqpair=0x132b990 00:18:18.952 [2024-04-15 16:10:48.707103] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:18.952 [2024-04-15 16:10:48.707114] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.707119] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.707124] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b990) 00:18:18.952 [2024-04-15 16:10:48.707131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.952 [2024-04-15 16:10:48.707146] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362230, cid 0, qid 0 00:18:18.952 [2024-04-15 16:10:48.707191] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.952 [2024-04-15 16:10:48.707202] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.952 [2024-04-15 16:10:48.707207] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.707212] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362230) on tqpair=0x132b990 00:18:18.952 [2024-04-15 16:10:48.707218] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:18:18.952 [2024-04-15 16:10:48.707225] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:18:18.952 [2024-04-15 16:10:48.707234] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:18.952 [2024-04-15 16:10:48.707341] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:18:18.952 [2024-04-15 16:10:48.707352] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:18.952 [2024-04-15 16:10:48.707362] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.707367] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.707372] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b990) 00:18:18.952 [2024-04-15 16:10:48.707379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.952 [2024-04-15 16:10:48.707396] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362230, cid 0, qid 0 00:18:18.952 [2024-04-15 16:10:48.707447] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.952 [2024-04-15 16:10:48.707454] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.952 [2024-04-15 16:10:48.707459] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.707463] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362230) on tqpair=0x132b990 00:18:18.952 
[2024-04-15 16:10:48.707470] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:18.952 [2024-04-15 16:10:48.707481] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.707486] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.707490] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b990) 00:18:18.952 [2024-04-15 16:10:48.707498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.952 [2024-04-15 16:10:48.707513] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362230, cid 0, qid 0 00:18:18.952 [2024-04-15 16:10:48.707558] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.952 [2024-04-15 16:10:48.707565] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.952 [2024-04-15 16:10:48.707569] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.707589] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362230) on tqpair=0x132b990 00:18:18.952 [2024-04-15 16:10:48.707596] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:18.952 [2024-04-15 16:10:48.707603] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:18:18.952 [2024-04-15 16:10:48.707612] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:18:18.952 [2024-04-15 16:10:48.707623] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:18:18.952 [2024-04-15 16:10:48.707635] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.952 [2024-04-15 16:10:48.707640] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b990) 00:18:18.953 [2024-04-15 16:10:48.707648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.953 [2024-04-15 16:10:48.707664] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362230, cid 0, qid 0 00:18:18.953 [2024-04-15 16:10:48.707755] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:18.953 [2024-04-15 16:10:48.707766] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:18.953 [2024-04-15 16:10:48.707771] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.707776] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b990): datao=0, datal=4096, cccid=0 00:18:18.953 [2024-04-15 16:10:48.707783] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1362230) on tqpair(0x132b990): expected_datao=0, payload_size=4096 00:18:18.953 [2024-04-15 16:10:48.707789] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.707799] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.707804] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.707814] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.953 [2024-04-15 16:10:48.707821] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.953 [2024-04-15 16:10:48.707825] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.707830] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362230) on tqpair=0x132b990 00:18:18.953 [2024-04-15 16:10:48.707841] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:18:18.953 [2024-04-15 16:10:48.707847] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:18:18.953 [2024-04-15 16:10:48.707853] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:18:18.953 [2024-04-15 16:10:48.707862] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:18:18.953 [2024-04-15 16:10:48.707868] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:18:18.953 [2024-04-15 16:10:48.707875] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:18:18.953 [2024-04-15 16:10:48.707885] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:18:18.953 [2024-04-15 16:10:48.707894] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.707899] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.707904] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b990) 00:18:18.953 [2024-04-15 16:10:48.707912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:18.953 [2024-04-15 16:10:48.707929] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362230, cid 0, qid 0 00:18:18.953 [2024-04-15 16:10:48.707977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.953 [2024-04-15 16:10:48.707984] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.953 [2024-04-15 16:10:48.707989] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.707994] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362230) on tqpair=0x132b990 00:18:18.953 [2024-04-15 16:10:48.708003] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708008] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708013] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x132b990) 00:18:18.953 [2024-04-15 16:10:48.708020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.953 [2024-04-15 16:10:48.708028] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708033] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708038] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x132b990) 00:18:18.953 [2024-04-15 16:10:48.708044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.953 [2024-04-15 16:10:48.708052] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708056] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708061] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x132b990) 00:18:18.953 [2024-04-15 16:10:48.708068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.953 [2024-04-15 16:10:48.708075] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708080] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708084] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.953 [2024-04-15 16:10:48.708091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.953 [2024-04-15 16:10:48.708097] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:18.953 [2024-04-15 16:10:48.708110] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:18.953 [2024-04-15 16:10:48.708118] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708123] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x132b990) 00:18:18.953 [2024-04-15 16:10:48.708131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.953 [2024-04-15 16:10:48.708148] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362230, cid 0, qid 0 00:18:18.953 [2024-04-15 16:10:48.708155] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362390, cid 1, qid 0 00:18:18.953 [2024-04-15 16:10:48.708160] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13624f0, cid 2, qid 0 00:18:18.953 [2024-04-15 16:10:48.708166] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.953 [2024-04-15 16:10:48.708172] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13627b0, cid 4, qid 0 00:18:18.953 [2024-04-15 16:10:48.708256] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.953 [2024-04-15 16:10:48.708263] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.953 [2024-04-15 16:10:48.708268] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708273] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13627b0) on tqpair=0x132b990 00:18:18.953 [2024-04-15 16:10:48.708280] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:18:18.953 [2024-04-15 16:10:48.708287] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific 
(timeout 30000 ms) 00:18:18.953 [2024-04-15 16:10:48.708296] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:18:18.953 [2024-04-15 16:10:48.708304] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:18.953 [2024-04-15 16:10:48.708311] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708316] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708321] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x132b990) 00:18:18.953 [2024-04-15 16:10:48.708328] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:18.953 [2024-04-15 16:10:48.708344] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13627b0, cid 4, qid 0 00:18:18.953 [2024-04-15 16:10:48.708401] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.953 [2024-04-15 16:10:48.708409] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.953 [2024-04-15 16:10:48.708413] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708418] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13627b0) on tqpair=0x132b990 00:18:18.953 [2024-04-15 16:10:48.708469] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:18:18.953 [2024-04-15 16:10:48.708483] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:18.953 [2024-04-15 16:10:48.708492] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708497] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x132b990) 00:18:18.953 [2024-04-15 16:10:48.708505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.953 [2024-04-15 16:10:48.708521] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13627b0, cid 4, qid 0 00:18:18.953 [2024-04-15 16:10:48.708594] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:18.953 [2024-04-15 16:10:48.708601] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:18.953 [2024-04-15 16:10:48.708606] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708611] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b990): datao=0, datal=4096, cccid=4 00:18:18.953 [2024-04-15 16:10:48.708617] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13627b0) on tqpair(0x132b990): expected_datao=0, payload_size=4096 00:18:18.953 [2024-04-15 16:10:48.708622] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708631] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708635] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708645] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.953 
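The identify results reported above (transport max_xfer_size, MDTS-limited 131072-byte transfers, CNTLID 0x0001, fused compare-and-write) end up in the controller data exposed by the public API once the attach completes. A small helper that prints a few of those values is sketched below; the helper name is illustrative.

#include "spdk/nvme.h"
#include <inttypes.h>
#include <stdio.h>

/* Sketch only: summarize a few identify fields reported in the trace above. */
static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	printf("model: %.40s  serial: %.20s\n", cdata->mn, cdata->sn);
	printf("max transfer size: %" PRIu32 " bytes\n",
	       spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));
	printf("fused compare-and-write: %s\n",
	       cdata->fuses.compare_and_write ? "supported" : "not supported");
}
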
[2024-04-15 16:10:48.708652] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.953 [2024-04-15 16:10:48.708656] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.953 [2024-04-15 16:10:48.708661] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13627b0) on tqpair=0x132b990 00:18:18.953 [2024-04-15 16:10:48.708672] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:18:18.953 [2024-04-15 16:10:48.708684] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:18:18.953 [2024-04-15 16:10:48.708694] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:18:18.953 [2024-04-15 16:10:48.708702] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.708707] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x132b990) 00:18:18.954 [2024-04-15 16:10:48.708715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.954 [2024-04-15 16:10:48.708731] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13627b0, cid 4, qid 0 00:18:18.954 [2024-04-15 16:10:48.708803] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:18.954 [2024-04-15 16:10:48.708813] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:18.954 [2024-04-15 16:10:48.708818] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.708823] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b990): datao=0, datal=4096, cccid=4 00:18:18.954 [2024-04-15 16:10:48.708829] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13627b0) on tqpair(0x132b990): expected_datao=0, payload_size=4096 00:18:18.954 [2024-04-15 16:10:48.708835] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.708843] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.708847] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.708857] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.954 [2024-04-15 16:10:48.708863] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.954 [2024-04-15 16:10:48.708868] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.708873] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13627b0) on tqpair=0x132b990 00:18:18.954 [2024-04-15 16:10:48.708890] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:18.954 [2024-04-15 16:10:48.708900] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:18.954 [2024-04-15 16:10:48.708909] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.708913] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x132b990) 00:18:18.954 [2024-04-15 16:10:48.708921] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.954 [2024-04-15 16:10:48.708937] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13627b0, cid 4, qid 0 00:18:18.954 [2024-04-15 16:10:48.708989] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:18.954 [2024-04-15 16:10:48.708996] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:18.954 [2024-04-15 16:10:48.709001] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709005] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b990): datao=0, datal=4096, cccid=4 00:18:18.954 [2024-04-15 16:10:48.709011] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13627b0) on tqpair(0x132b990): expected_datao=0, payload_size=4096 00:18:18.954 [2024-04-15 16:10:48.709017] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709024] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709029] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709038] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.954 [2024-04-15 16:10:48.709044] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.954 [2024-04-15 16:10:48.709049] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709054] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13627b0) on tqpair=0x132b990 00:18:18.954 [2024-04-15 16:10:48.709063] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:18.954 [2024-04-15 16:10:48.709073] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:18:18.954 [2024-04-15 16:10:48.709083] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:18:18.954 [2024-04-15 16:10:48.709091] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:18.954 [2024-04-15 16:10:48.709098] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:18:18.954 [2024-04-15 16:10:48.709105] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:18:18.954 [2024-04-15 16:10:48.709111] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:18:18.954 [2024-04-15 16:10:48.709118] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:18:18.954 [2024-04-15 16:10:48.709137] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709142] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x132b990) 00:18:18.954 [2024-04-15 16:10:48.709149] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:18.954 [2024-04-15 16:10:48.709158] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709163] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709167] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x132b990) 00:18:18.954 [2024-04-15 16:10:48.709174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.954 [2024-04-15 16:10:48.709195] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13627b0, cid 4, qid 0 00:18:18.954 [2024-04-15 16:10:48.709202] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362910, cid 5, qid 0 00:18:18.954 [2024-04-15 16:10:48.709271] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.954 [2024-04-15 16:10:48.709281] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.954 [2024-04-15 16:10:48.709286] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709291] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13627b0) on tqpair=0x132b990 00:18:18.954 [2024-04-15 16:10:48.709300] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.954 [2024-04-15 16:10:48.709307] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.954 [2024-04-15 16:10:48.709311] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709316] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362910) on tqpair=0x132b990 00:18:18.954 [2024-04-15 16:10:48.709328] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709333] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x132b990) 00:18:18.954 [2024-04-15 16:10:48.709349] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.954 [2024-04-15 16:10:48.709365] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362910, cid 5, qid 0 00:18:18.954 [2024-04-15 16:10:48.709411] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.954 [2024-04-15 16:10:48.709418] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.954 [2024-04-15 16:10:48.709422] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709427] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362910) on tqpair=0x132b990 00:18:18.954 [2024-04-15 16:10:48.709439] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709444] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x132b990) 00:18:18.954 [2024-04-15 16:10:48.709451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.954 [2024-04-15 16:10:48.709466] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362910, cid 5, qid 0 00:18:18.954 [2024-04-15 16:10:48.709525] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.954 [2024-04-15 16:10:48.709532] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.954 
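The admin-queue traffic traced above (keep alive every 5000000 us, four outstanding ASYNC EVENT REQUESTs, the GET FEATURES probes) is completed by polling the admin queue pair. A polling loop over the public API might look like the sketch below; the function name and loop structure are illustrative.

#include "spdk/nvme.h"
#include <stdbool.h>

/* Sketch only: service keep-alives, AER completions, and other admin
 * commands for an attached controller by polling its admin queue. */
static void poll_admin(struct spdk_nvme_ctrlr *ctrlr, volatile bool *running)
{
	while (*running) {
		/* Reaps admin completions, re-arms AERs, and emits keep-alives
		 * when the configured keep-alive interval elapses. */
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}
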
[2024-04-15 16:10:48.709536] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709541] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362910) on tqpair=0x132b990 00:18:18.954 [2024-04-15 16:10:48.709553] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709558] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x132b990) 00:18:18.954 [2024-04-15 16:10:48.709565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.954 [2024-04-15 16:10:48.709590] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362910, cid 5, qid 0 00:18:18.954 [2024-04-15 16:10:48.709640] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.954 [2024-04-15 16:10:48.709650] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.954 [2024-04-15 16:10:48.709654] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709659] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362910) on tqpair=0x132b990 00:18:18.954 [2024-04-15 16:10:48.709674] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709679] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x132b990) 00:18:18.954 [2024-04-15 16:10:48.709687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.954 [2024-04-15 16:10:48.709695] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709700] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x132b990) 00:18:18.954 [2024-04-15 16:10:48.709707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.954 [2024-04-15 16:10:48.709715] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709720] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x132b990) 00:18:18.954 [2024-04-15 16:10:48.709727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.954 [2024-04-15 16:10:48.709736] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.954 [2024-04-15 16:10:48.709741] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x132b990) 00:18:18.955 [2024-04-15 16:10:48.709748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.955 [2024-04-15 16:10:48.709765] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362910, cid 5, qid 0 00:18:18.955 [2024-04-15 16:10:48.709771] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13627b0, cid 4, qid 0 00:18:18.955 [2024-04-15 16:10:48.709777] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362a70, cid 6, qid 0 00:18:18.955 [2024-04-15 16:10:48.709782] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362bd0, cid 7, qid 0 00:18:18.955 [2024-04-15 16:10:48.709911] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:18.955 [2024-04-15 16:10:48.709926] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:18.955 [2024-04-15 16:10:48.709931] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.709935] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b990): datao=0, datal=8192, cccid=5 00:18:18.955 [2024-04-15 16:10:48.709941] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1362910) on tqpair(0x132b990): expected_datao=0, payload_size=8192 00:18:18.955 [2024-04-15 16:10:48.709947] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.709965] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.709970] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.709977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:18.955 [2024-04-15 16:10:48.709983] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:18.955 [2024-04-15 16:10:48.709988] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.709993] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b990): datao=0, datal=512, cccid=4 00:18:18.955 [2024-04-15 16:10:48.709998] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13627b0) on tqpair(0x132b990): expected_datao=0, payload_size=512 00:18:18.955 [2024-04-15 16:10:48.710004] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.710011] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.710016] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.710022] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:18.955 [2024-04-15 16:10:48.710029] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:18.955 [2024-04-15 16:10:48.710033] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.710038] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b990): datao=0, datal=512, cccid=6 00:18:18.955 [2024-04-15 16:10:48.710044] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1362a70) on tqpair(0x132b990): expected_datao=0, payload_size=512 00:18:18.955 [2024-04-15 16:10:48.710049] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.710056] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.710061] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.710068] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:18.955 [2024-04-15 16:10:48.710075] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:18.955 [2024-04-15 16:10:48.710079] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.710084] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x132b990): datao=0, datal=4096, cccid=7 00:18:18.955 [2024-04-15 16:10:48.710089] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1362bd0) on tqpair(0x132b990): expected_datao=0, payload_size=4096 00:18:18.955 [2024-04-15 16:10:48.710095] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.710103] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.710108] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.710114] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.955 [2024-04-15 16:10:48.710121] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.955 [2024-04-15 16:10:48.710125] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.710130] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362910) on tqpair=0x132b990 00:18:18.955 [2024-04-15 16:10:48.710149] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.955 [2024-04-15 16:10:48.710156] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.955 [2024-04-15 16:10:48.710160] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.710165] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13627b0) on tqpair=0x132b990 00:18:18.955 [2024-04-15 16:10:48.710181] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.955 [2024-04-15 16:10:48.710188] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.955 [2024-04-15 16:10:48.710192] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.955 [2024-04-15 16:10:48.710197] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362a70) on tqpair=0x132b990 00:18:18.955 ===================================================== 00:18:18.955 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:18.955 ===================================================== 00:18:18.955 Controller Capabilities/Features 00:18:18.955 ================================ 00:18:18.955 Vendor ID: 8086 00:18:18.955 Subsystem Vendor ID: 8086 00:18:18.955 Serial Number: SPDK00000000000001 00:18:18.955 Model Number: SPDK bdev Controller 00:18:18.955 Firmware Version: 24.05 00:18:18.955 Recommended Arb Burst: 6 00:18:18.955 IEEE OUI Identifier: e4 d2 5c 00:18:18.955 Multi-path I/O 00:18:18.955 May have multiple subsystem ports: Yes 00:18:18.955 May have multiple controllers: Yes 00:18:18.955 Associated with SR-IOV VF: No 00:18:18.955 Max Data Transfer Size: 131072 00:18:18.955 Max Number of Namespaces: 32 00:18:18.955 Max Number of I/O Queues: 127 00:18:18.955 NVMe Specification Version (VS): 1.3 00:18:18.955 NVMe Specification Version (Identify): 1.3 00:18:18.955 Maximum Queue Entries: 128 00:18:18.955 Contiguous Queues Required: Yes 00:18:18.955 Arbitration Mechanisms Supported 00:18:18.955 Weighted Round Robin: Not Supported 00:18:18.955 Vendor Specific: Not Supported 00:18:18.955 Reset Timeout: 15000 ms 00:18:18.955 Doorbell Stride: 4 bytes 00:18:18.955 NVM Subsystem Reset: Not Supported 00:18:18.955 Command Sets Supported 00:18:18.955 NVM Command Set: Supported 00:18:18.955 Boot Partition: Not Supported 00:18:18.955 Memory Page Size Minimum: 4096 bytes 00:18:18.955 Memory Page Size Maximum: 4096 bytes 00:18:18.955 Persistent Memory Region: Not Supported 00:18:18.955 Optional Asynchronous Events Supported 00:18:18.955 Namespace Attribute Notices: 
Supported 00:18:18.955 Firmware Activation Notices: Not Supported 00:18:18.955 ANA Change Notices: Not Supported 00:18:18.955 PLE Aggregate Log Change Notices: Not Supported 00:18:18.955 LBA Status Info Alert Notices: Not Supported 00:18:18.955 EGE Aggregate Log Change Notices: Not Supported 00:18:18.955 Normal NVM Subsystem Shutdown event: Not Supported 00:18:18.955 Zone Descriptor Change Notices: Not Supported 00:18:18.955 Discovery Log Change Notices: Not Supported 00:18:18.955 Controller Attributes 00:18:18.955 128-bit Host Identifier: Supported 00:18:18.955 Non-Operational Permissive Mode: Not Supported 00:18:18.955 NVM Sets: Not Supported 00:18:18.955 Read Recovery Levels: Not Supported 00:18:18.955 Endurance Groups: Not Supported 00:18:18.955 Predictable Latency Mode: Not Supported 00:18:18.955 Traffic Based Keep ALive: Not Supported 00:18:18.955 Namespace Granularity: Not Supported 00:18:18.955 SQ Associations: Not Supported 00:18:18.955 UUID List: Not Supported 00:18:18.955 Multi-Domain Subsystem: Not Supported 00:18:18.955 Fixed Capacity Management: Not Supported 00:18:18.955 Variable Capacity Management: Not Supported 00:18:18.955 Delete Endurance Group: Not Supported 00:18:18.955 Delete NVM Set: Not Supported 00:18:18.955 Extended LBA Formats Supported: Not Supported 00:18:18.955 Flexible Data Placement Supported: Not Supported 00:18:18.955 00:18:18.955 Controller Memory Buffer Support 00:18:18.955 ================================ 00:18:18.955 Supported: No 00:18:18.955 00:18:18.955 Persistent Memory Region Support 00:18:18.955 ================================ 00:18:18.955 Supported: No 00:18:18.955 00:18:18.955 Admin Command Set Attributes 00:18:18.955 ============================ 00:18:18.955 Security Send/Receive: Not Supported 00:18:18.955 Format NVM: Not Supported 00:18:18.955 Firmware Activate/Download: Not Supported 00:18:18.955 Namespace Management: Not Supported 00:18:18.955 Device Self-Test: Not Supported 00:18:18.955 Directives: Not Supported 00:18:18.955 NVMe-MI: Not Supported 00:18:18.955 Virtualization Management: Not Supported 00:18:18.955 Doorbell Buffer Config: Not Supported 00:18:18.955 Get LBA Status Capability: Not Supported 00:18:18.955 Command & Feature Lockdown Capability: Not Supported 00:18:18.955 Abort Command Limit: 4 00:18:18.955 Async Event Request Limit: 4 00:18:18.955 Number of Firmware Slots: N/A 00:18:18.955 Firmware Slot 1 Read-Only: N/A 00:18:18.955 Firmware Activation Without Reset: [2024-04-15 16:10:48.710207] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.956 [2024-04-15 16:10:48.710213] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.956 [2024-04-15 16:10:48.710218] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.956 [2024-04-15 16:10:48.710223] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362bd0) on tqpair=0x132b990 00:18:18.956 N/A 00:18:18.956 Multiple Update Detection Support: N/A 00:18:18.956 Firmware Update Granularity: No Information Provided 00:18:18.956 Per-Namespace SMART Log: No 00:18:18.956 Asymmetric Namespace Access Log Page: Not Supported 00:18:18.956 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:18:18.956 Command Effects Log Page: Supported 00:18:18.956 Get Log Page Extended Data: Supported 00:18:18.956 Telemetry Log Pages: Not Supported 00:18:18.956 Persistent Event Log Pages: Not Supported 00:18:18.956 Supported Log Pages Log Page: May Support 00:18:18.956 Commands Supported & Effects Log Page: Not Supported 
00:18:18.956 Feature Identifiers & Effects Log Page:May Support 00:18:18.956 NVMe-MI Commands & Effects Log Page: May Support 00:18:18.956 Data Area 4 for Telemetry Log: Not Supported 00:18:18.956 Error Log Page Entries Supported: 128 00:18:18.956 Keep Alive: Supported 00:18:18.956 Keep Alive Granularity: 10000 ms 00:18:18.956 00:18:18.956 NVM Command Set Attributes 00:18:18.956 ========================== 00:18:18.956 Submission Queue Entry Size 00:18:18.956 Max: 64 00:18:18.956 Min: 64 00:18:18.956 Completion Queue Entry Size 00:18:18.956 Max: 16 00:18:18.956 Min: 16 00:18:18.956 Number of Namespaces: 32 00:18:18.956 Compare Command: Supported 00:18:18.956 Write Uncorrectable Command: Not Supported 00:18:18.956 Dataset Management Command: Supported 00:18:18.956 Write Zeroes Command: Supported 00:18:18.956 Set Features Save Field: Not Supported 00:18:18.956 Reservations: Supported 00:18:18.956 Timestamp: Not Supported 00:18:18.956 Copy: Supported 00:18:18.956 Volatile Write Cache: Present 00:18:18.956 Atomic Write Unit (Normal): 1 00:18:18.956 Atomic Write Unit (PFail): 1 00:18:18.956 Atomic Compare & Write Unit: 1 00:18:18.956 Fused Compare & Write: Supported 00:18:18.956 Scatter-Gather List 00:18:18.956 SGL Command Set: Supported 00:18:18.956 SGL Keyed: Supported 00:18:18.956 SGL Bit Bucket Descriptor: Not Supported 00:18:18.956 SGL Metadata Pointer: Not Supported 00:18:18.956 Oversized SGL: Not Supported 00:18:18.956 SGL Metadata Address: Not Supported 00:18:18.956 SGL Offset: Supported 00:18:18.956 Transport SGL Data Block: Not Supported 00:18:18.956 Replay Protected Memory Block: Not Supported 00:18:18.956 00:18:18.956 Firmware Slot Information 00:18:18.956 ========================= 00:18:18.956 Active slot: 1 00:18:18.956 Slot 1 Firmware Revision: 24.05 00:18:18.956 00:18:18.956 00:18:18.956 Commands Supported and Effects 00:18:18.956 ============================== 00:18:18.956 Admin Commands 00:18:18.956 -------------- 00:18:18.956 Get Log Page (02h): Supported 00:18:18.956 Identify (06h): Supported 00:18:18.956 Abort (08h): Supported 00:18:18.956 Set Features (09h): Supported 00:18:18.956 Get Features (0Ah): Supported 00:18:18.956 Asynchronous Event Request (0Ch): Supported 00:18:18.956 Keep Alive (18h): Supported 00:18:18.956 I/O Commands 00:18:18.956 ------------ 00:18:18.956 Flush (00h): Supported LBA-Change 00:18:18.956 Write (01h): Supported LBA-Change 00:18:18.956 Read (02h): Supported 00:18:18.956 Compare (05h): Supported 00:18:18.956 Write Zeroes (08h): Supported LBA-Change 00:18:18.956 Dataset Management (09h): Supported LBA-Change 00:18:18.956 Copy (19h): Supported LBA-Change 00:18:18.956 Unknown (79h): Supported LBA-Change 00:18:18.956 Unknown (7Ah): Supported 00:18:18.956 00:18:18.956 Error Log 00:18:18.956 ========= 00:18:18.956 00:18:18.956 Arbitration 00:18:18.956 =========== 00:18:18.956 Arbitration Burst: 1 00:18:18.956 00:18:18.956 Power Management 00:18:18.956 ================ 00:18:18.956 Number of Power States: 1 00:18:18.956 Current Power State: Power State #0 00:18:18.956 Power State #0: 00:18:18.956 Max Power: 0.00 W 00:18:18.956 Non-Operational State: Operational 00:18:18.956 Entry Latency: Not Reported 00:18:18.956 Exit Latency: Not Reported 00:18:18.956 Relative Read Throughput: 0 00:18:18.956 Relative Read Latency: 0 00:18:18.956 Relative Write Throughput: 0 00:18:18.956 Relative Write Latency: 0 00:18:18.956 Idle Power: Not Reported 00:18:18.956 Active Power: Not Reported 00:18:18.956 Non-Operational Permissive Mode: Not Supported 00:18:18.956 
00:18:18.956 Health Information 00:18:18.956 ================== 00:18:18.956 Critical Warnings: 00:18:18.956 Available Spare Space: OK 00:18:18.956 Temperature: OK 00:18:18.956 Device Reliability: OK 00:18:18.956 Read Only: No 00:18:18.956 Volatile Memory Backup: OK 00:18:18.956 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:18.956 Temperature Threshold: [2024-04-15 16:10:48.710342] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.956 [2024-04-15 16:10:48.710348] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x132b990) 00:18:18.956 [2024-04-15 16:10:48.710356] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.956 [2024-04-15 16:10:48.710376] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362bd0, cid 7, qid 0 00:18:18.956 [2024-04-15 16:10:48.710426] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.956 [2024-04-15 16:10:48.710433] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.956 [2024-04-15 16:10:48.710437] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.956 [2024-04-15 16:10:48.710442] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362bd0) on tqpair=0x132b990 00:18:18.956 [2024-04-15 16:10:48.710477] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:18:18.956 [2024-04-15 16:10:48.710490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.956 [2024-04-15 16:10:48.710499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.956 [2024-04-15 16:10:48.710506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.956 [2024-04-15 16:10:48.710514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.956 [2024-04-15 16:10:48.710523] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.956 [2024-04-15 16:10:48.710528] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.956 [2024-04-15 16:10:48.710533] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.956 [2024-04-15 16:10:48.710540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.956 [2024-04-15 16:10:48.710558] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.956 [2024-04-15 16:10:48.710613] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.956 [2024-04-15 16:10:48.710621] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.956 [2024-04-15 16:10:48.710626] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.956 [2024-04-15 16:10:48.710630] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.957 [2024-04-15 16:10:48.710639] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.710644] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:18:18.957 [2024-04-15 16:10:48.710649] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.957 [2024-04-15 16:10:48.710656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.957 [2024-04-15 16:10:48.710675] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.957 [2024-04-15 16:10:48.710746] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.957 [2024-04-15 16:10:48.710753] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.957 [2024-04-15 16:10:48.710758] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.710762] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.957 [2024-04-15 16:10:48.710769] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:18:18.957 [2024-04-15 16:10:48.710776] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:18:18.957 [2024-04-15 16:10:48.710786] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.710791] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.710796] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.957 [2024-04-15 16:10:48.710803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.957 [2024-04-15 16:10:48.710819] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.957 [2024-04-15 16:10:48.710863] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.957 [2024-04-15 16:10:48.710870] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.957 [2024-04-15 16:10:48.710875] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.710880] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.957 [2024-04-15 16:10:48.710892] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.710897] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.710902] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.957 [2024-04-15 16:10:48.710909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.957 [2024-04-15 16:10:48.710924] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.957 [2024-04-15 16:10:48.710978] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.957 [2024-04-15 16:10:48.710985] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.957 [2024-04-15 16:10:48.710989] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.710994] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.957 [2024-04-15 16:10:48.711006] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:18:18.957 [2024-04-15 16:10:48.711011] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711015] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.957 [2024-04-15 16:10:48.711023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.957 [2024-04-15 16:10:48.711038] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.957 [2024-04-15 16:10:48.711082] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.957 [2024-04-15 16:10:48.711089] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.957 [2024-04-15 16:10:48.711093] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711098] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.957 [2024-04-15 16:10:48.711110] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711115] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711119] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.957 [2024-04-15 16:10:48.711127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.957 [2024-04-15 16:10:48.711142] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.957 [2024-04-15 16:10:48.711190] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.957 [2024-04-15 16:10:48.711197] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.957 [2024-04-15 16:10:48.711202] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711207] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.957 [2024-04-15 16:10:48.711218] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711223] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711228] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.957 [2024-04-15 16:10:48.711235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.957 [2024-04-15 16:10:48.711250] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.957 [2024-04-15 16:10:48.711301] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.957 [2024-04-15 16:10:48.711308] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.957 [2024-04-15 16:10:48.711312] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711317] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.957 [2024-04-15 16:10:48.711329] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711334] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711339] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.957 [2024-04-15 16:10:48.711346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.957 [2024-04-15 16:10:48.711361] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.957 [2024-04-15 16:10:48.711414] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.957 [2024-04-15 16:10:48.711422] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.957 [2024-04-15 16:10:48.711426] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711431] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.957 [2024-04-15 16:10:48.711442] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711447] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711452] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.957 [2024-04-15 16:10:48.711459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.957 [2024-04-15 16:10:48.711474] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.957 [2024-04-15 16:10:48.711534] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.957 [2024-04-15 16:10:48.711541] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.957 [2024-04-15 16:10:48.711546] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711551] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.957 [2024-04-15 16:10:48.711562] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711567] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711572] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.957 [2024-04-15 16:10:48.711588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.957 [2024-04-15 16:10:48.711603] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.957 [2024-04-15 16:10:48.711648] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.957 [2024-04-15 16:10:48.711655] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.957 [2024-04-15 16:10:48.711660] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711665] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.957 [2024-04-15 16:10:48.711676] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711681] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.957 [2024-04-15 16:10:48.711686] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.957 [2024-04-15 16:10:48.711693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:18.957 [2024-04-15 16:10:48.711708] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0
00:18:18.957 [2024-04-15 16:10:48.711750] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:18.957 [2024-04-15 16:10:48.711757] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:18.957 [2024-04-15 16:10:48.711762] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:18.957 [2024-04-15 16:10:48.711767] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990
00:18:18.957 [2024-04-15 16:10:48.711778] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:18.957 [2024-04-15 16:10:48.711783] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:18.957 [2024-04-15 16:10:48.711788] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990)
00:18:18.957 [2024-04-15 16:10:48.711795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:18.957 [2024-04-15 16:10:48.711810] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0
00:18:18.964 [2024-04-15 16:10:48.718389] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:18:18.964 [2024-04-15 16:10:48.718396] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:18:18.964 [2024-04-15 16:10:48.718401] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:18:18.964 [2024-04-15 16:10:48.718405] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990
00:18:18.964 [2024-04-15 16:10:48.718417] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:18:18.964 [2024-04-15 16:10:48.718422] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:18:18.964 [2024-04-15 16:10:48.718426] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990)
00:18:18.964 [2024-04-15 16:10:48.718434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:18.964 [2024-04-15 16:10:48.718449] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid
0 00:18:18.964 [2024-04-15 16:10:48.718494] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.964 [2024-04-15 16:10:48.718501] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.964 [2024-04-15 16:10:48.718505] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718510] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.964 [2024-04-15 16:10:48.718522] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718526] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718531] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.964 [2024-04-15 16:10:48.718539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.964 [2024-04-15 16:10:48.718553] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.964 [2024-04-15 16:10:48.718614] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.964 [2024-04-15 16:10:48.718622] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.964 [2024-04-15 16:10:48.718626] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718631] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.964 [2024-04-15 16:10:48.718643] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718648] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718653] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.964 [2024-04-15 16:10:48.718660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.964 [2024-04-15 16:10:48.718676] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.964 [2024-04-15 16:10:48.718724] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.964 [2024-04-15 16:10:48.718734] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.964 [2024-04-15 16:10:48.718739] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718744] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.964 [2024-04-15 16:10:48.718756] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718761] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718765] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.964 [2024-04-15 16:10:48.718772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.964 [2024-04-15 16:10:48.718788] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.964 [2024-04-15 16:10:48.718830] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.964 [2024-04-15 16:10:48.718837] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:18:18.964 [2024-04-15 16:10:48.718841] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718846] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.964 [2024-04-15 16:10:48.718857] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718863] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718867] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.964 [2024-04-15 16:10:48.718875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.964 [2024-04-15 16:10:48.718889] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.964 [2024-04-15 16:10:48.718938] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.964 [2024-04-15 16:10:48.718945] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.964 [2024-04-15 16:10:48.718949] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718954] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.964 [2024-04-15 16:10:48.718966] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718971] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.718975] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.964 [2024-04-15 16:10:48.718983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.964 [2024-04-15 16:10:48.718998] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.964 [2024-04-15 16:10:48.719040] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.964 [2024-04-15 16:10:48.719047] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.964 [2024-04-15 16:10:48.719051] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.719056] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.964 [2024-04-15 16:10:48.719068] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.719073] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.719077] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.964 [2024-04-15 16:10:48.719085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.964 [2024-04-15 16:10:48.719100] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.964 [2024-04-15 16:10:48.719141] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.964 [2024-04-15 16:10:48.719148] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.964 [2024-04-15 16:10:48.719153] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.719157] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.964 [2024-04-15 16:10:48.719169] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.719174] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.719179] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.964 [2024-04-15 16:10:48.719186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.964 [2024-04-15 16:10:48.719201] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.964 [2024-04-15 16:10:48.719246] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.964 [2024-04-15 16:10:48.719253] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.964 [2024-04-15 16:10:48.719257] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.719262] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.964 [2024-04-15 16:10:48.719273] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.719278] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.719283] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.964 [2024-04-15 16:10:48.719290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.964 [2024-04-15 16:10:48.719305] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.964 [2024-04-15 16:10:48.719353] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.964 [2024-04-15 16:10:48.719363] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.964 [2024-04-15 16:10:48.719368] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.719373] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.964 [2024-04-15 16:10:48.719384] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.719389] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.719394] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.964 [2024-04-15 16:10:48.719401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.964 [2024-04-15 16:10:48.719416] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.964 [2024-04-15 16:10:48.719471] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.964 [2024-04-15 16:10:48.719478] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.964 [2024-04-15 16:10:48.719483] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.964 [2024-04-15 16:10:48.719488] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.965 [2024-04-15 16:10:48.719499] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:18:18.965 [2024-04-15 16:10:48.719504] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.965 [2024-04-15 16:10:48.719509] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.965 [2024-04-15 16:10:48.719516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.965 [2024-04-15 16:10:48.719531] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.965 [2024-04-15 16:10:48.732605] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.965 [2024-04-15 16:10:48.732633] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.965 [2024-04-15 16:10:48.732639] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.965 [2024-04-15 16:10:48.732644] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.965 [2024-04-15 16:10:48.732664] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:18.965 [2024-04-15 16:10:48.732670] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:18.965 [2024-04-15 16:10:48.732674] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x132b990) 00:18:18.965 [2024-04-15 16:10:48.732686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.965 [2024-04-15 16:10:48.732730] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1362650, cid 3, qid 0 00:18:18.965 [2024-04-15 16:10:48.732806] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:18.965 [2024-04-15 16:10:48.732812] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:18.965 [2024-04-15 16:10:48.732817] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:18.965 [2024-04-15 16:10:48.732822] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1362650) on tqpair=0x132b990 00:18:18.965 [2024-04-15 16:10:48.732831] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 22 milliseconds 00:18:18.965 0 Kelvin (-273 Celsius) 00:18:18.965 Available Spare: 0% 00:18:18.965 Available Spare Threshold: 0% 00:18:18.965 Life Percentage Used: 0% 00:18:18.965 Data Units Read: 0 00:18:18.965 Data Units Written: 0 00:18:18.965 Host Read Commands: 0 00:18:18.965 Host Write Commands: 0 00:18:18.965 Controller Busy Time: 0 minutes 00:18:18.965 Power Cycles: 0 00:18:18.965 Power On Hours: 0 hours 00:18:18.965 Unsafe Shutdowns: 0 00:18:18.965 Unrecoverable Media Errors: 0 00:18:18.965 Lifetime Error Log Entries: 0 00:18:18.965 Warning Temperature Time: 0 minutes 00:18:18.965 Critical Temperature Time: 0 minutes 00:18:18.965 00:18:18.965 Number of Queues 00:18:18.965 ================ 00:18:18.965 Number of I/O Submission Queues: 127 00:18:18.965 Number of I/O Completion Queues: 127 00:18:18.965 00:18:18.965 Active Namespaces 00:18:18.965 ================= 00:18:18.965 Namespace ID:1 00:18:18.965 Error Recovery Timeout: Unlimited 00:18:18.965 Command Set Identifier: NVM (00h) 00:18:18.965 Deallocate: Supported 00:18:18.965 Deallocated/Unwritten Error: Not Supported 00:18:18.965 Deallocated Read Value: Unknown 00:18:18.965 Deallocate in Write Zeroes: Not Supported 00:18:18.965 Deallocated Guard Field: 0xFFFF 00:18:18.965 Flush: 
Supported 00:18:18.965 Reservation: Supported 00:18:18.965 Namespace Sharing Capabilities: Multiple Controllers 00:18:18.965 Size (in LBAs): 131072 (0GiB) 00:18:18.965 Capacity (in LBAs): 131072 (0GiB) 00:18:18.965 Utilization (in LBAs): 131072 (0GiB) 00:18:18.965 NGUID: ABCDEF0123456789ABCDEF0123456789 00:18:18.965 EUI64: ABCDEF0123456789 00:18:18.965 UUID: 65ec005f-f06b-4253-b77f-ab208917c20c 00:18:18.965 Thin Provisioning: Not Supported 00:18:18.965 Per-NS Atomic Units: Yes 00:18:18.965 Atomic Boundary Size (Normal): 0 00:18:18.965 Atomic Boundary Size (PFail): 0 00:18:18.965 Atomic Boundary Offset: 0 00:18:18.965 Maximum Single Source Range Length: 65535 00:18:18.965 Maximum Copy Length: 65535 00:18:18.965 Maximum Source Range Count: 1 00:18:18.965 NGUID/EUI64 Never Reused: No 00:18:18.965 Namespace Write Protected: No 00:18:18.965 Number of LBA Formats: 1 00:18:18.965 Current LBA Format: LBA Format #00 00:18:18.965 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:18.965 00:18:18.965 16:10:48 -- host/identify.sh@51 -- # sync 00:18:18.965 16:10:48 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.965 16:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:18.965 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:18:18.965 16:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.965 16:10:48 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:18:18.965 16:10:48 -- host/identify.sh@56 -- # nvmftestfini 00:18:18.965 16:10:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:18.965 16:10:48 -- nvmf/common.sh@117 -- # sync 00:18:18.965 16:10:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:18.965 16:10:48 -- nvmf/common.sh@120 -- # set +e 00:18:18.965 16:10:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:18.965 16:10:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:18.965 rmmod nvme_tcp 00:18:18.965 rmmod nvme_fabrics 00:18:18.965 16:10:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:18.965 16:10:48 -- nvmf/common.sh@124 -- # set -e 00:18:18.965 16:10:48 -- nvmf/common.sh@125 -- # return 0 00:18:18.965 16:10:48 -- nvmf/common.sh@478 -- # '[' -n 86079 ']' 00:18:18.965 16:10:48 -- nvmf/common.sh@479 -- # killprocess 86079 00:18:18.965 16:10:48 -- common/autotest_common.sh@936 -- # '[' -z 86079 ']' 00:18:18.965 16:10:48 -- common/autotest_common.sh@940 -- # kill -0 86079 00:18:18.965 16:10:48 -- common/autotest_common.sh@941 -- # uname 00:18:18.965 16:10:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.965 16:10:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86079 00:18:18.965 16:10:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:18.965 16:10:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:18.965 killing process with pid 86079 00:18:18.965 16:10:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86079' 00:18:18.965 16:10:48 -- common/autotest_common.sh@955 -- # kill 86079 00:18:18.965 16:10:48 -- common/autotest_common.sh@960 -- # wait 86079 00:18:18.965 [2024-04-15 16:10:48.850874] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:18:19.224 16:10:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:19.224 16:10:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:19.224 16:10:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:19.224 
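The 'listener.transport is deprecated in favor of trtype' warning just above appears to be the shutdown summary of a deprecation hit from the nvmf_get_subsystems RPC issued during the identify test. A minimal way to repeat that query against a still-running target, using the same rpc.py path seen throughout this run, would be roughly the following; the jq filter is only an illustrative assumption about the JSON layout, not something this test executes:
# Dump subsystems, listeners and namespaces from the running nvmf_tgt
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems
# Hypothetical follow-up: read the listener transport through the non-deprecated
# trtype field rather than listener.transport
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems | jq '.[].listen_addresses[].trtype'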
16:10:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.224 16:10:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.224 16:10:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.224 16:10:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.224 16:10:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.224 16:10:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:19.224 00:18:19.224 real 0m2.574s 00:18:19.224 user 0m6.715s 00:18:19.224 sys 0m0.720s 00:18:19.224 16:10:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:19.224 ************************************ 00:18:19.224 16:10:49 -- common/autotest_common.sh@10 -- # set +x 00:18:19.224 END TEST nvmf_identify 00:18:19.224 ************************************ 00:18:19.224 16:10:49 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:19.224 16:10:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:19.224 16:10:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:19.224 16:10:49 -- common/autotest_common.sh@10 -- # set +x 00:18:19.483 ************************************ 00:18:19.483 START TEST nvmf_perf 00:18:19.483 ************************************ 00:18:19.483 16:10:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:19.483 * Looking for test storage... 00:18:19.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:19.483 16:10:49 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:19.483 16:10:49 -- nvmf/common.sh@7 -- # uname -s 00:18:19.483 16:10:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.483 16:10:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.483 16:10:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.483 16:10:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.483 16:10:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.483 16:10:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.483 16:10:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.483 16:10:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.483 16:10:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.483 16:10:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.483 16:10:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:18:19.483 16:10:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:18:19.483 16:10:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.483 16:10:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.483 16:10:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:19.483 16:10:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.483 16:10:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:19.483 16:10:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.483 16:10:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.483 16:10:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.483 16:10:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.483 16:10:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.483 16:10:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.483 16:10:49 -- paths/export.sh@5 -- # export PATH 00:18:19.483 16:10:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.483 16:10:49 -- nvmf/common.sh@47 -- # : 0 00:18:19.483 16:10:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:19.483 16:10:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:19.483 16:10:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.483 16:10:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.483 16:10:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.483 16:10:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:19.483 16:10:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:19.483 16:10:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:19.483 16:10:49 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:19.483 16:10:49 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:19.483 16:10:49 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.483 16:10:49 -- host/perf.sh@17 -- # nvmftestinit 00:18:19.483 16:10:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:19.483 16:10:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.483 16:10:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:19.483 16:10:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:19.483 16:10:49 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:18:19.483 16:10:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.484 16:10:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.484 16:10:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.484 16:10:49 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:19.484 16:10:49 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:19.484 16:10:49 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:19.484 16:10:49 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:19.484 16:10:49 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:19.484 16:10:49 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:19.484 16:10:49 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.484 16:10:49 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.484 16:10:49 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:19.484 16:10:49 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:19.484 16:10:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:19.484 16:10:49 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:19.484 16:10:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:19.484 16:10:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.484 16:10:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:19.484 16:10:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:19.484 16:10:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:19.484 16:10:49 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:19.484 16:10:49 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:19.484 16:10:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:19.484 Cannot find device "nvmf_tgt_br" 00:18:19.484 16:10:49 -- nvmf/common.sh@155 -- # true 00:18:19.484 16:10:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:19.484 Cannot find device "nvmf_tgt_br2" 00:18:19.484 16:10:49 -- nvmf/common.sh@156 -- # true 00:18:19.484 16:10:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:19.484 16:10:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:19.484 Cannot find device "nvmf_tgt_br" 00:18:19.484 16:10:49 -- nvmf/common.sh@158 -- # true 00:18:19.484 16:10:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:19.484 Cannot find device "nvmf_tgt_br2" 00:18:19.484 16:10:49 -- nvmf/common.sh@159 -- # true 00:18:19.484 16:10:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:19.742 16:10:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:19.742 16:10:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.742 16:10:49 -- nvmf/common.sh@162 -- # true 00:18:19.742 16:10:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.742 16:10:49 -- nvmf/common.sh@163 -- # true 00:18:19.742 16:10:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:19.742 16:10:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:19.742 16:10:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:19.742 16:10:49 -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:19.742 16:10:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:19.742 16:10:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:19.742 16:10:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:19.742 16:10:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:19.742 16:10:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:19.742 16:10:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:19.742 16:10:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:19.742 16:10:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:19.742 16:10:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:19.742 16:10:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:19.742 16:10:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:19.742 16:10:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:19.742 16:10:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:19.742 16:10:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:19.742 16:10:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:19.742 16:10:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:19.742 16:10:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:19.742 16:10:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:19.742 16:10:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:20.000 16:10:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:20.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:18:20.000 00:18:20.000 --- 10.0.0.2 ping statistics --- 00:18:20.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.000 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:20.000 16:10:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:20.000 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:20.000 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:18:20.000 00:18:20.000 --- 10.0.0.3 ping statistics --- 00:18:20.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.000 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:20.000 16:10:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:20.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:20.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:18:20.000 00:18:20.000 --- 10.0.0.1 ping statistics --- 00:18:20.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.000 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:20.000 16:10:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.000 16:10:49 -- nvmf/common.sh@422 -- # return 0 00:18:20.000 16:10:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:20.000 16:10:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.000 16:10:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:20.000 16:10:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:20.000 16:10:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.000 16:10:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:20.000 16:10:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:20.000 16:10:49 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:20.000 16:10:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:20.000 16:10:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:20.000 16:10:49 -- common/autotest_common.sh@10 -- # set +x 00:18:20.000 16:10:49 -- nvmf/common.sh@470 -- # nvmfpid=86295 00:18:20.000 16:10:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:20.000 16:10:49 -- nvmf/common.sh@471 -- # waitforlisten 86295 00:18:20.000 16:10:49 -- common/autotest_common.sh@817 -- # '[' -z 86295 ']' 00:18:20.000 16:10:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.000 16:10:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:20.000 16:10:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.000 16:10:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:20.000 16:10:49 -- common/autotest_common.sh@10 -- # set +x 00:18:20.000 [2024-04-15 16:10:49.806787] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:18:20.000 [2024-04-15 16:10:49.807141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.000 [2024-04-15 16:10:49.955558] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:20.257 [2024-04-15 16:10:50.006282] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.257 [2024-04-15 16:10:50.006617] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.257 [2024-04-15 16:10:50.006816] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.257 [2024-04-15 16:10:50.006993] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.257 [2024-04-15 16:10:50.007195] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
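The ping checks above confirm the veth topology that nvmf_veth_init builds before the target starts: nvmf_init_if (10.0.0.1) on the host side and nvmf_tgt_if (10.0.0.2) inside the nvmf_tgt_ns_spdk namespace, joined by the nvmf_br bridge. Condensed from the trace above, a from-scratch sketch of that setup looks roughly like this (the second target interface, the matching 'ip link set ... up' calls and error handling are omitted for brevity; this is not a substitute for nvmf/common.sh):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# The target itself then runs inside the namespace, exactly as traced above:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF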
00:18:20.257 [2024-04-15 16:10:50.007459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.257 [2024-04-15 16:10:50.007615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.257 [2024-04-15 16:10:50.008546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:20.257 [2024-04-15 16:10:50.008561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.822 16:10:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:20.822 16:10:50 -- common/autotest_common.sh@850 -- # return 0 00:18:20.822 16:10:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:20.822 16:10:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:20.822 16:10:50 -- common/autotest_common.sh@10 -- # set +x 00:18:20.822 16:10:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.822 16:10:50 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:18:20.822 16:10:50 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:21.389 16:10:51 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:18:21.389 16:10:51 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:21.389 16:10:51 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:18:21.389 16:10:51 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:21.954 16:10:51 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:21.954 16:10:51 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:18:21.954 16:10:51 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:21.954 16:10:51 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:21.954 16:10:51 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:21.954 [2024-04-15 16:10:51.921028] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.212 16:10:51 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:22.469 16:10:52 -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:22.469 16:10:52 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:22.469 16:10:52 -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:22.469 16:10:52 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:22.727 16:10:52 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.984 [2024-04-15 16:10:52.814222] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.984 16:10:52 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:23.243 16:10:53 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:23.243 16:10:53 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:23.243 16:10:53 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:23.243 16:10:53 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:24.616 Initializing NVMe 
Controllers 00:18:24.616 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:24.616 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:24.616 Initialization complete. Launching workers. 00:18:24.616 ======================================================== 00:18:24.616 Latency(us) 00:18:24.616 Device Information : IOPS MiB/s Average min max 00:18:24.616 PCIE (0000:00:10.0) NSID 1 from core 0: 24351.49 95.12 1313.20 306.90 17610.05 00:18:24.616 ======================================================== 00:18:24.616 Total : 24351.49 95.12 1313.20 306.90 17610.05 00:18:24.616 00:18:24.616 16:10:54 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:25.988 Initializing NVMe Controllers 00:18:25.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:25.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:25.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:25.988 Initialization complete. Launching workers. 00:18:25.988 ======================================================== 00:18:25.988 Latency(us) 00:18:25.988 Device Information : IOPS MiB/s Average min max 00:18:25.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3723.41 14.54 268.30 99.21 15194.39 00:18:25.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 37.55 0.15 26631.40 23969.78 35998.62 00:18:25.988 ======================================================== 00:18:25.988 Total : 3760.96 14.69 531.52 99.21 35998.62 00:18:25.988 00:18:25.988 16:10:55 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:27.360 Initializing NVMe Controllers 00:18:27.360 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:27.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:27.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:27.360 Initialization complete. Launching workers. 00:18:27.360 ======================================================== 00:18:27.360 Latency(us) 00:18:27.360 Device Information : IOPS MiB/s Average min max 00:18:27.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8400.20 32.81 3810.18 482.16 17207.94 00:18:27.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1252.13 4.89 25848.36 16738.15 35725.71 00:18:27.360 ======================================================== 00:18:27.360 Total : 9652.33 37.70 6669.04 482.16 35725.71 00:18:27.360 00:18:27.360 16:10:57 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:27.360 16:10:57 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:30.644 Initializing NVMe Controllers 00:18:30.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:30.644 Controller IO queue size 128, less than required. 00:18:30.644 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:30.644 Controller IO queue size 128, less than required. 
00:18:30.644 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:30.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:30.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:30.644 Initialization complete. Launching workers. 00:18:30.644 ======================================================== 00:18:30.644 Latency(us) 00:18:30.644 Device Information : IOPS MiB/s Average min max 00:18:30.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1732.99 433.25 75789.58 50259.00 129650.01 00:18:30.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 254.50 63.62 577938.31 286862.52 1055070.00 00:18:30.644 ======================================================== 00:18:30.644 Total : 1987.49 496.87 140089.88 50259.00 1055070.00 00:18:30.644 00:18:30.644 16:11:00 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:18:30.902 No valid NVMe controllers or AIO or URING devices found 00:18:30.902 Initializing NVMe Controllers 00:18:30.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:30.902 Controller IO queue size 128, less than required. 00:18:30.902 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:30.902 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:30.902 Controller IO queue size 128, less than required. 00:18:30.902 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:30.902 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:30.902 WARNING: Some requested NVMe devices were skipped 00:18:30.902 16:11:00 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:18:34.221 Initializing NVMe Controllers 00:18:34.221 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:34.221 Controller IO queue size 128, less than required. 00:18:34.221 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:34.221 Controller IO queue size 128, less than required. 00:18:34.221 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:34.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:34.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:34.221 Initialization complete. Launching workers. 
00:18:34.221 00:18:34.221 ==================== 00:18:34.221 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:34.221 TCP transport: 00:18:34.221 polls: 16989 00:18:34.221 idle_polls: 0 00:18:34.221 sock_completions: 16989 00:18:34.221 nvme_completions: 6599 00:18:34.221 submitted_requests: 9772 00:18:34.221 queued_requests: 1 00:18:34.221 00:18:34.221 ==================== 00:18:34.221 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:34.221 TCP transport: 00:18:34.221 polls: 18529 00:18:34.221 idle_polls: 0 00:18:34.221 sock_completions: 18529 00:18:34.221 nvme_completions: 6197 00:18:34.221 submitted_requests: 9290 00:18:34.221 queued_requests: 1 00:18:34.221 ======================================================== 00:18:34.221 Latency(us) 00:18:34.221 Device Information : IOPS MiB/s Average min max 00:18:34.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1647.70 411.92 78455.81 49456.06 134878.55 00:18:34.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1547.31 386.83 84604.02 37265.57 142809.93 00:18:34.221 ======================================================== 00:18:34.221 Total : 3195.01 798.75 81433.32 37265.57 142809.93 00:18:34.221 00:18:34.221 16:11:03 -- host/perf.sh@66 -- # sync 00:18:34.221 16:11:03 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:34.221 16:11:04 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:18:34.221 16:11:04 -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:18:34.221 16:11:04 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:18:34.493 16:11:04 -- host/perf.sh@72 -- # ls_guid=89491240-8a3b-49d9-a958-8c052fe8e207 00:18:34.494 16:11:04 -- host/perf.sh@73 -- # get_lvs_free_mb 89491240-8a3b-49d9-a958-8c052fe8e207 00:18:34.494 16:11:04 -- common/autotest_common.sh@1350 -- # local lvs_uuid=89491240-8a3b-49d9-a958-8c052fe8e207 00:18:34.494 16:11:04 -- common/autotest_common.sh@1351 -- # local lvs_info 00:18:34.494 16:11:04 -- common/autotest_common.sh@1352 -- # local fc 00:18:34.494 16:11:04 -- common/autotest_common.sh@1353 -- # local cs 00:18:34.494 16:11:04 -- common/autotest_common.sh@1354 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:34.755 16:11:04 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:18:34.755 { 00:18:34.755 "uuid": "89491240-8a3b-49d9-a958-8c052fe8e207", 00:18:34.755 "name": "lvs_0", 00:18:34.755 "base_bdev": "Nvme0n1", 00:18:34.755 "total_data_clusters": 1278, 00:18:34.755 "free_clusters": 1278, 00:18:34.755 "block_size": 4096, 00:18:34.755 "cluster_size": 4194304 00:18:34.755 } 00:18:34.755 ]' 00:18:34.755 16:11:04 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="89491240-8a3b-49d9-a958-8c052fe8e207") .free_clusters' 00:18:34.755 16:11:04 -- common/autotest_common.sh@1355 -- # fc=1278 00:18:34.755 16:11:04 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="89491240-8a3b-49d9-a958-8c052fe8e207") .cluster_size' 00:18:34.755 5112 00:18:34.755 16:11:04 -- common/autotest_common.sh@1356 -- # cs=4194304 00:18:34.755 16:11:04 -- common/autotest_common.sh@1359 -- # free_mb=5112 00:18:34.755 16:11:04 -- common/autotest_common.sh@1360 -- # echo 5112 00:18:34.755 16:11:04 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:18:34.755 16:11:04 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
-u 89491240-8a3b-49d9-a958-8c052fe8e207 lbd_0 5112 00:18:35.320 16:11:04 -- host/perf.sh@80 -- # lb_guid=b27a7785-c854-4b67-b73d-1f3c31b18c7c 00:18:35.320 16:11:04 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore b27a7785-c854-4b67-b73d-1f3c31b18c7c lvs_n_0 00:18:35.577 16:11:05 -- host/perf.sh@83 -- # ls_nested_guid=bc3e8bd2-59b1-4fed-98b0-506c16fe727b 00:18:35.577 16:11:05 -- host/perf.sh@84 -- # get_lvs_free_mb bc3e8bd2-59b1-4fed-98b0-506c16fe727b 00:18:35.577 16:11:05 -- common/autotest_common.sh@1350 -- # local lvs_uuid=bc3e8bd2-59b1-4fed-98b0-506c16fe727b 00:18:35.577 16:11:05 -- common/autotest_common.sh@1351 -- # local lvs_info 00:18:35.577 16:11:05 -- common/autotest_common.sh@1352 -- # local fc 00:18:35.577 16:11:05 -- common/autotest_common.sh@1353 -- # local cs 00:18:35.577 16:11:05 -- common/autotest_common.sh@1354 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:35.834 16:11:05 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:18:35.834 { 00:18:35.834 "uuid": "89491240-8a3b-49d9-a958-8c052fe8e207", 00:18:35.834 "name": "lvs_0", 00:18:35.834 "base_bdev": "Nvme0n1", 00:18:35.834 "total_data_clusters": 1278, 00:18:35.834 "free_clusters": 0, 00:18:35.834 "block_size": 4096, 00:18:35.834 "cluster_size": 4194304 00:18:35.834 }, 00:18:35.834 { 00:18:35.834 "uuid": "bc3e8bd2-59b1-4fed-98b0-506c16fe727b", 00:18:35.834 "name": "lvs_n_0", 00:18:35.834 "base_bdev": "b27a7785-c854-4b67-b73d-1f3c31b18c7c", 00:18:35.834 "total_data_clusters": 1276, 00:18:35.834 "free_clusters": 1276, 00:18:35.834 "block_size": 4096, 00:18:35.834 "cluster_size": 4194304 00:18:35.834 } 00:18:35.834 ]' 00:18:35.834 16:11:05 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="bc3e8bd2-59b1-4fed-98b0-506c16fe727b") .free_clusters' 00:18:35.834 16:11:05 -- common/autotest_common.sh@1355 -- # fc=1276 00:18:35.834 16:11:05 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="bc3e8bd2-59b1-4fed-98b0-506c16fe727b") .cluster_size' 00:18:35.834 5104 00:18:35.834 16:11:05 -- common/autotest_common.sh@1356 -- # cs=4194304 00:18:35.834 16:11:05 -- common/autotest_common.sh@1359 -- # free_mb=5104 00:18:35.834 16:11:05 -- common/autotest_common.sh@1360 -- # echo 5104 00:18:35.834 16:11:05 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:18:35.834 16:11:05 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bc3e8bd2-59b1-4fed-98b0-506c16fe727b lbd_nest_0 5104 00:18:36.400 16:11:06 -- host/perf.sh@88 -- # lb_nested_guid=e75865de-8c02-4ed9-ace2-77c13e1a9209 00:18:36.400 16:11:06 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:36.657 16:11:06 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:18:36.657 16:11:06 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 e75865de-8c02-4ed9-ace2-77c13e1a9209 00:18:36.915 16:11:06 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.173 16:11:07 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:18:37.173 16:11:07 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:18:37.173 16:11:07 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:37.173 16:11:07 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:37.173 16:11:07 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:37.770 No valid NVMe controllers or AIO or URING devices found 00:18:38.028 Initializing NVMe Controllers 00:18:38.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:38.028 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:38.028 WARNING: Some requested NVMe devices were skipped 00:18:38.028 16:11:07 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:38.028 16:11:07 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:50.217 Initializing NVMe Controllers 00:18:50.217 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:50.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:50.217 Initialization complete. Launching workers. 00:18:50.217 ======================================================== 00:18:50.217 Latency(us) 00:18:50.217 Device Information : IOPS MiB/s Average min max 00:18:50.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1014.61 126.83 982.78 274.74 17510.40 00:18:50.218 ======================================================== 00:18:50.218 Total : 1014.61 126.83 982.78 274.74 17510.40 00:18:50.218 00:18:50.218 16:11:18 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:50.218 16:11:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:50.218 16:11:18 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:50.218 No valid NVMe controllers or AIO or URING devices found 00:18:50.218 Initializing NVMe Controllers 00:18:50.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:50.218 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:50.218 WARNING: Some requested NVMe devices were skipped 00:18:50.218 16:11:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:50.218 16:11:18 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:00.190 Initializing NVMe Controllers 00:19:00.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:00.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:00.190 Initialization complete. Launching workers. 
00:19:00.190 ======================================================== 00:19:00.190 Latency(us) 00:19:00.190 Device Information : IOPS MiB/s Average min max 00:19:00.190 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 386.46 48.31 83336.42 18547.84 187827.74 00:19:00.190 ======================================================== 00:19:00.190 Total : 386.46 48.31 83336.42 18547.84 187827.74 00:19:00.190 00:19:00.190 16:11:29 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:19:00.190 16:11:29 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:00.190 16:11:29 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:00.190 No valid NVMe controllers or AIO or URING devices found 00:19:00.190 Initializing NVMe Controllers 00:19:00.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:00.190 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:19:00.190 WARNING: Some requested NVMe devices were skipped 00:19:00.190 16:11:29 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:00.190 16:11:29 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:12.386 Initializing NVMe Controllers 00:19:12.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:12.386 Controller IO queue size 128, less than required. 00:19:12.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:12.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:12.386 Initialization complete. Launching workers. 
00:19:12.386 ======================================================== 00:19:12.386 Latency(us) 00:19:12.386 Device Information : IOPS MiB/s Average min max 00:19:12.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3508.10 438.51 36534.94 10894.32 94157.91 00:19:12.386 ======================================================== 00:19:12.386 Total : 3508.10 438.51 36534.94 10894.32 94157.91 00:19:12.386 00:19:12.386 16:11:40 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:12.386 16:11:40 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e75865de-8c02-4ed9-ace2-77c13e1a9209 00:19:12.386 16:11:40 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:19:12.386 16:11:41 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b27a7785-c854-4b67-b73d-1f3c31b18c7c 00:19:12.386 16:11:41 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:12.386 16:11:41 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:12.386 16:11:41 -- host/perf.sh@114 -- # nvmftestfini 00:19:12.386 16:11:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:12.386 16:11:41 -- nvmf/common.sh@117 -- # sync 00:19:12.386 16:11:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:12.386 16:11:41 -- nvmf/common.sh@120 -- # set +e 00:19:12.386 16:11:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:12.386 16:11:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:12.386 rmmod nvme_tcp 00:19:12.386 rmmod nvme_fabrics 00:19:12.386 16:11:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:12.386 16:11:41 -- nvmf/common.sh@124 -- # set -e 00:19:12.386 16:11:41 -- nvmf/common.sh@125 -- # return 0 00:19:12.386 16:11:41 -- nvmf/common.sh@478 -- # '[' -n 86295 ']' 00:19:12.386 16:11:41 -- nvmf/common.sh@479 -- # killprocess 86295 00:19:12.386 16:11:41 -- common/autotest_common.sh@936 -- # '[' -z 86295 ']' 00:19:12.386 16:11:41 -- common/autotest_common.sh@940 -- # kill -0 86295 00:19:12.386 16:11:41 -- common/autotest_common.sh@941 -- # uname 00:19:12.386 16:11:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:12.386 16:11:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86295 00:19:12.386 killing process with pid 86295 00:19:12.386 16:11:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:12.386 16:11:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:12.386 16:11:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86295' 00:19:12.386 16:11:41 -- common/autotest_common.sh@955 -- # kill 86295 00:19:12.386 16:11:41 -- common/autotest_common.sh@960 -- # wait 86295 00:19:12.952 16:11:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:12.952 16:11:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:12.952 16:11:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:12.952 16:11:42 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:12.952 16:11:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:12.952 16:11:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.952 16:11:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.952 16:11:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.952 16:11:42 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 
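For reference, the six spdk_nvme_perf runs traced above come from a nested loop in host/perf.sh over the qd_depth and io_size arrays set at 16:11:07. A condensed sketch of that sweep (reconstructed from the trace; the binary path, the arrays, and the 10.0.0.2:4420 TCP listener are all taken from the log) is:

    # sweep queue depth x I/O size against the TCP listener exported above
    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
    done
    # As the warnings above show, the 512-byte runs are skipped ("No valid NVMe
    # controllers ... found") because the exported namespace has a 4096-byte block
    # size, so only the 131072-byte runs produce latency tables.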
00:19:12.952 ************************************ 00:19:12.952 END TEST nvmf_perf 00:19:12.952 ************************************ 00:19:12.952 00:19:12.952 real 0m53.671s 00:19:12.952 user 3m21.258s 00:19:12.952 sys 0m15.127s 00:19:12.952 16:11:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:12.952 16:11:42 -- common/autotest_common.sh@10 -- # set +x 00:19:13.294 16:11:42 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:13.294 16:11:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:13.294 16:11:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:13.294 16:11:42 -- common/autotest_common.sh@10 -- # set +x 00:19:13.294 ************************************ 00:19:13.294 START TEST nvmf_fio_host 00:19:13.294 ************************************ 00:19:13.294 16:11:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:13.294 * Looking for test storage... 00:19:13.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:13.294 16:11:43 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:13.294 16:11:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.294 16:11:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.294 16:11:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.294 16:11:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.294 16:11:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.294 16:11:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.294 16:11:43 -- paths/export.sh@5 -- # export PATH 00:19:13.294 16:11:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.294 16:11:43 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:13.294 16:11:43 -- nvmf/common.sh@7 -- # uname -s 00:19:13.294 16:11:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.294 16:11:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.294 16:11:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.294 16:11:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.294 16:11:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.294 16:11:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.294 16:11:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.294 16:11:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.294 16:11:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.294 16:11:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.294 16:11:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:19:13.294 16:11:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:19:13.294 16:11:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.294 16:11:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.294 16:11:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:13.294 16:11:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.294 16:11:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:13.294 16:11:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.294 16:11:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.294 16:11:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.294 16:11:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.295 16:11:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.295 16:11:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.295 16:11:43 -- paths/export.sh@5 -- # export PATH 00:19:13.295 16:11:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.295 16:11:43 -- nvmf/common.sh@47 -- # : 0 00:19:13.295 16:11:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:13.295 16:11:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:13.295 16:11:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.295 16:11:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.295 16:11:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.295 16:11:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:13.295 16:11:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:13.295 16:11:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:13.295 16:11:43 -- host/fio.sh@12 -- # nvmftestinit 00:19:13.295 16:11:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:13.295 16:11:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.295 16:11:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:13.295 16:11:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:13.295 16:11:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:13.295 16:11:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.295 16:11:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:13.295 16:11:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.295 16:11:43 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:13.295 16:11:43 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:13.295 16:11:43 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:13.295 16:11:43 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:13.295 16:11:43 -- nvmf/common.sh@420 
-- # [[ tcp == tcp ]] 00:19:13.295 16:11:43 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:13.295 16:11:43 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:13.295 16:11:43 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:13.295 16:11:43 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:13.295 16:11:43 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:13.295 16:11:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:13.295 16:11:43 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:13.295 16:11:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:13.295 16:11:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:13.295 16:11:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:13.295 16:11:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:13.295 16:11:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:13.295 16:11:43 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:13.295 16:11:43 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:13.295 16:11:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:13.295 Cannot find device "nvmf_tgt_br" 00:19:13.295 16:11:43 -- nvmf/common.sh@155 -- # true 00:19:13.295 16:11:43 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:13.295 Cannot find device "nvmf_tgt_br2" 00:19:13.295 16:11:43 -- nvmf/common.sh@156 -- # true 00:19:13.295 16:11:43 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:13.295 16:11:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:13.295 Cannot find device "nvmf_tgt_br" 00:19:13.295 16:11:43 -- nvmf/common.sh@158 -- # true 00:19:13.295 16:11:43 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:13.295 Cannot find device "nvmf_tgt_br2" 00:19:13.295 16:11:43 -- nvmf/common.sh@159 -- # true 00:19:13.295 16:11:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:13.295 16:11:43 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:13.553 16:11:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:13.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:13.553 16:11:43 -- nvmf/common.sh@162 -- # true 00:19:13.553 16:11:43 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:13.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:13.553 16:11:43 -- nvmf/common.sh@163 -- # true 00:19:13.553 16:11:43 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:13.553 16:11:43 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:13.553 16:11:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:13.553 16:11:43 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:13.553 16:11:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:13.553 16:11:43 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:13.553 16:11:43 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:13.553 16:11:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:13.553 16:11:43 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev 
nvmf_tgt_if2 00:19:13.553 16:11:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:13.553 16:11:43 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:13.553 16:11:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:13.553 16:11:43 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:13.553 16:11:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:13.553 16:11:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:13.553 16:11:43 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:13.553 16:11:43 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:13.553 16:11:43 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:13.553 16:11:43 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:13.554 16:11:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:13.554 16:11:43 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:13.554 16:11:43 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:13.554 16:11:43 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:13.554 16:11:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:13.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:13.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:19:13.554 00:19:13.554 --- 10.0.0.2 ping statistics --- 00:19:13.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.554 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:19:13.554 16:11:43 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:13.554 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:13.554 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:19:13.554 00:19:13.554 --- 10.0.0.3 ping statistics --- 00:19:13.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.554 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:13.554 16:11:43 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:13.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:13.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:13.554 00:19:13.554 --- 10.0.0.1 ping statistics --- 00:19:13.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:13.554 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:13.554 16:11:43 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:13.554 16:11:43 -- nvmf/common.sh@422 -- # return 0 00:19:13.554 16:11:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:13.554 16:11:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:13.554 16:11:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:13.554 16:11:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:13.554 16:11:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:13.554 16:11:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:13.554 16:11:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:13.815 16:11:43 -- host/fio.sh@14 -- # [[ y != y ]] 00:19:13.815 16:11:43 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:19:13.815 16:11:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:13.815 16:11:43 -- common/autotest_common.sh@10 -- # set +x 00:19:13.815 16:11:43 -- host/fio.sh@22 -- # nvmfpid=87141 00:19:13.815 16:11:43 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:13.815 16:11:43 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:13.815 16:11:43 -- host/fio.sh@26 -- # waitforlisten 87141 00:19:13.815 16:11:43 -- common/autotest_common.sh@817 -- # '[' -z 87141 ']' 00:19:13.815 16:11:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.815 16:11:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:13.816 16:11:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.816 16:11:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:13.816 16:11:43 -- common/autotest_common.sh@10 -- # set +x 00:19:13.816 [2024-04-15 16:11:43.575648] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:19:13.816 [2024-04-15 16:11:43.575919] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.816 [2024-04-15 16:11:43.728760] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:14.097 [2024-04-15 16:11:43.804640] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.097 [2024-04-15 16:11:43.804973] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.097 [2024-04-15 16:11:43.805203] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.097 [2024-04-15 16:11:43.805458] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.097 [2024-04-15 16:11:43.805697] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:14.097 [2024-04-15 16:11:43.805853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.097 [2024-04-15 16:11:43.805919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.097 [2024-04-15 16:11:43.806635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.097 [2024-04-15 16:11:43.806649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.676 16:11:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:14.676 16:11:44 -- common/autotest_common.sh@850 -- # return 0 00:19:14.676 16:11:44 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:14.676 16:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.676 16:11:44 -- common/autotest_common.sh@10 -- # set +x 00:19:14.676 [2024-04-15 16:11:44.536773] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.676 16:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.676 16:11:44 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:19:14.676 16:11:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:14.676 16:11:44 -- common/autotest_common.sh@10 -- # set +x 00:19:14.676 16:11:44 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:14.676 16:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.676 16:11:44 -- common/autotest_common.sh@10 -- # set +x 00:19:14.676 Malloc1 00:19:14.676 16:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.676 16:11:44 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:14.676 16:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.676 16:11:44 -- common/autotest_common.sh@10 -- # set +x 00:19:14.676 16:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.676 16:11:44 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:14.676 16:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.676 16:11:44 -- common/autotest_common.sh@10 -- # set +x 00:19:14.676 16:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.676 16:11:44 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:14.676 16:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.676 16:11:44 -- common/autotest_common.sh@10 -- # set +x 00:19:14.676 [2024-04-15 16:11:44.635379] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.676 16:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.676 16:11:44 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:14.676 16:11:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.676 16:11:44 -- common/autotest_common.sh@10 -- # set +x 00:19:14.963 16:11:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.963 16:11:44 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:14.963 16:11:44 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:14.963 16:11:44 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
00:19:14.963 16:11:44 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:14.963 16:11:44 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:14.963 16:11:44 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:14.963 16:11:44 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:14.963 16:11:44 -- common/autotest_common.sh@1327 -- # shift 00:19:14.963 16:11:44 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:14.963 16:11:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:14.963 16:11:44 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:14.963 16:11:44 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:14.963 16:11:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:14.963 16:11:44 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:14.963 16:11:44 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:14.963 16:11:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:14.963 16:11:44 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:14.963 16:11:44 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:14.963 16:11:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:14.963 16:11:44 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:14.963 16:11:44 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:14.963 16:11:44 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:14.963 16:11:44 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:14.963 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:14.963 fio-3.35 00:19:14.963 Starting 1 thread 00:19:17.520 00:19:17.520 test: (groupid=0, jobs=1): err= 0: pid=87206: Mon Apr 15 16:11:47 2024 00:19:17.520 read: IOPS=9014, BW=35.2MiB/s (36.9MB/s)(70.7MiB/2007msec) 00:19:17.520 slat (nsec): min=1910, max=164399, avg=2144.10, stdev=1700.37 00:19:17.520 clat (usec): min=1542, max=13145, avg=7412.73, stdev=553.96 00:19:17.520 lat (usec): min=1565, max=13147, avg=7414.87, stdev=553.80 00:19:17.520 clat percentiles (usec): 00:19:17.520 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7046], 00:19:17.520 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7373], 60.00th=[ 7504], 00:19:17.520 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8291], 00:19:17.520 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[11207], 99.95th=[12125], 00:19:17.520 | 99.99th=[13173] 00:19:17.520 bw ( KiB/s): min=35560, max=36320, per=99.94%, avg=36036.00, stdev=360.77, samples=4 00:19:17.520 iops : min= 8890, max= 9080, avg=9009.00, stdev=90.19, samples=4 00:19:17.520 write: IOPS=9032, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2007msec); 0 zone resets 00:19:17.520 slat (nsec): min=1971, max=123667, avg=2220.68, stdev=1133.94 00:19:17.520 clat (usec): min=1215, max=12160, avg=6715.19, stdev=495.35 00:19:17.520 lat (usec): min=1222, max=12162, avg=6717.41, stdev=495.26 00:19:17.520 clat percentiles (usec): 00:19:17.520 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6390], 00:19:17.520 | 30.00th=[ 6521], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6783], 
00:19:17.520 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7439], 00:19:17.520 | 99.00th=[ 7963], 99.50th=[ 8225], 99.90th=[10159], 99.95th=[10945], 00:19:17.520 | 99.99th=[12125] 00:19:17.520 bw ( KiB/s): min=35432, max=36568, per=100.00%, avg=36150.00, stdev=495.16, samples=4 00:19:17.520 iops : min= 8858, max= 9142, avg=9037.50, stdev=123.79, samples=4 00:19:17.520 lat (msec) : 2=0.04%, 4=0.12%, 10=99.67%, 20=0.17% 00:19:17.520 cpu : usr=69.89%, sys=25.22%, ctx=16, majf=0, minf=4 00:19:17.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:17.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.520 issued rwts: total=18092,18128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.520 00:19:17.520 Run status group 0 (all jobs): 00:19:17.520 READ: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.7MiB (74.1MB), run=2007-2007msec 00:19:17.520 WRITE: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.3MB), run=2007-2007msec 00:19:17.520 16:11:47 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:19:17.520 16:11:47 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:19:17.520 16:11:47 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:17.520 16:11:47 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:17.520 16:11:47 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:17.520 16:11:47 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:17.520 16:11:47 -- common/autotest_common.sh@1327 -- # shift 00:19:17.520 16:11:47 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:17.520 16:11:47 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:17.520 16:11:47 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:17.520 16:11:47 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:17.520 16:11:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:17.520 16:11:47 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:17.520 16:11:47 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:17.520 16:11:47 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:17.520 16:11:47 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:17.520 16:11:47 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:17.520 16:11:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:17.520 16:11:47 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:17.520 16:11:47 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:17.520 16:11:47 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:17.520 16:11:47 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
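For reference, the fio_plugin expansion traced above shows how fio_nvme drives fio through the SPDK NVMe external ioengine: it preloads build/fio/spdk_nvme and selects the NVMe-oF namespace via the trtype/adrfam/traddr/trsvcid filename syntax. Condensed from those trace lines (paths and target address copied from the log; the leading space inside LD_PRELOAD in the trace is just the empty $asan_lib slot):

    # run fio with the SPDK NVMe fio plugin preloaded; the remote namespace is
    # addressed through the trtype/adrfam/traddr/trsvcid/ns filename syntax
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'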
00:19:17.520 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:17.520 fio-3.35 00:19:17.520 Starting 1 thread 00:19:20.054 00:19:20.054 test: (groupid=0, jobs=1): err= 0: pid=87250: Mon Apr 15 16:11:49 2024 00:19:20.054 read: IOPS=8489, BW=133MiB/s (139MB/s)(267MiB/2010msec) 00:19:20.054 slat (usec): min=2, max=108, avg= 3.51, stdev= 1.88 00:19:20.054 clat (usec): min=2481, max=18140, avg=8383.84, stdev=2734.80 00:19:20.054 lat (usec): min=2485, max=18143, avg=8387.35, stdev=2734.92 00:19:20.054 clat percentiles (usec): 00:19:20.054 | 1.00th=[ 3851], 5.00th=[ 4621], 10.00th=[ 5145], 20.00th=[ 5997], 00:19:20.054 | 30.00th=[ 6652], 40.00th=[ 7308], 50.00th=[ 8029], 60.00th=[ 8717], 00:19:20.054 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[12125], 95.00th=[13435], 00:19:20.054 | 99.00th=[16319], 99.50th=[16909], 99.90th=[17695], 99.95th=[17957], 00:19:20.054 | 99.99th=[18220] 00:19:20.054 bw ( KiB/s): min=63808, max=81632, per=51.60%, avg=70080.00, stdev=8003.28, samples=4 00:19:20.054 iops : min= 3988, max= 5102, avg=4380.00, stdev=500.21, samples=4 00:19:20.054 write: IOPS=4955, BW=77.4MiB/s (81.2MB/s)(143MiB/1841msec); 0 zone resets 00:19:20.054 slat (usec): min=29, max=281, avg=38.78, stdev= 8.03 00:19:20.054 clat (usec): min=4444, max=20708, avg=11751.26, stdev=2310.43 00:19:20.054 lat (usec): min=4480, max=20744, avg=11790.04, stdev=2312.57 00:19:20.054 clat percentiles (usec): 00:19:20.054 | 1.00th=[ 7504], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9765], 00:19:20.054 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11469], 60.00th=[12125], 00:19:20.054 | 70.00th=[12780], 80.00th=[13698], 90.00th=[15008], 95.00th=[15926], 00:19:20.054 | 99.00th=[17695], 99.50th=[18220], 99.90th=[20579], 99.95th=[20579], 00:19:20.054 | 99.99th=[20579] 00:19:20.054 bw ( KiB/s): min=65792, max=83712, per=91.59%, avg=72616.00, stdev=7907.79, samples=4 00:19:20.054 iops : min= 4112, max= 5232, avg=4538.50, stdev=494.24, samples=4 00:19:20.054 lat (msec) : 4=0.94%, 10=55.76%, 20=43.26%, 50=0.05% 00:19:20.054 cpu : usr=79.84%, sys=15.58%, ctx=147, majf=0, minf=2 00:19:20.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:20.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:20.054 issued rwts: total=17063,9123,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:20.054 00:19:20.054 Run status group 0 (all jobs): 00:19:20.054 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=267MiB (280MB), run=2010-2010msec 00:19:20.054 WRITE: bw=77.4MiB/s (81.2MB/s), 77.4MiB/s-77.4MiB/s (81.2MB/s-81.2MB/s), io=143MiB (149MB), run=1841-1841msec 00:19:20.054 16:11:49 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:20.054 16:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.054 16:11:49 -- common/autotest_common.sh@10 -- # set +x 00:19:20.054 16:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.054 16:11:49 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:19:20.054 16:11:49 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:19:20.054 16:11:49 -- host/fio.sh@49 -- # get_nvme_bdfs 00:19:20.054 16:11:49 -- common/autotest_common.sh@1499 -- # bdfs=() 00:19:20.054 16:11:49 -- common/autotest_common.sh@1499 -- # local bdfs 00:19:20.054 16:11:49 -- 
common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:20.054 16:11:49 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:20.054 16:11:49 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:19:20.054 16:11:49 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:19:20.054 16:11:49 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:20.054 16:11:49 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:19:20.054 16:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.054 16:11:49 -- common/autotest_common.sh@10 -- # set +x 00:19:20.054 Nvme0n1 00:19:20.054 16:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.054 16:11:49 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:19:20.054 16:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.054 16:11:49 -- common/autotest_common.sh@10 -- # set +x 00:19:20.054 16:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.054 16:11:49 -- host/fio.sh@51 -- # ls_guid=3888eb28-160f-4b7e-901d-c917df502660 00:19:20.054 16:11:49 -- host/fio.sh@52 -- # get_lvs_free_mb 3888eb28-160f-4b7e-901d-c917df502660 00:19:20.054 16:11:49 -- common/autotest_common.sh@1350 -- # local lvs_uuid=3888eb28-160f-4b7e-901d-c917df502660 00:19:20.054 16:11:49 -- common/autotest_common.sh@1351 -- # local lvs_info 00:19:20.054 16:11:49 -- common/autotest_common.sh@1352 -- # local fc 00:19:20.054 16:11:49 -- common/autotest_common.sh@1353 -- # local cs 00:19:20.054 16:11:49 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:20.054 16:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.054 16:11:49 -- common/autotest_common.sh@10 -- # set +x 00:19:20.054 16:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.054 16:11:49 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:19:20.054 { 00:19:20.054 "uuid": "3888eb28-160f-4b7e-901d-c917df502660", 00:19:20.054 "name": "lvs_0", 00:19:20.054 "base_bdev": "Nvme0n1", 00:19:20.054 "total_data_clusters": 4, 00:19:20.054 "free_clusters": 4, 00:19:20.054 "block_size": 4096, 00:19:20.054 "cluster_size": 1073741824 00:19:20.054 } 00:19:20.054 ]' 00:19:20.054 16:11:49 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="3888eb28-160f-4b7e-901d-c917df502660") .free_clusters' 00:19:20.054 16:11:49 -- common/autotest_common.sh@1355 -- # fc=4 00:19:20.054 16:11:49 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="3888eb28-160f-4b7e-901d-c917df502660") .cluster_size' 00:19:20.054 16:11:49 -- common/autotest_common.sh@1356 -- # cs=1073741824 00:19:20.054 16:11:49 -- common/autotest_common.sh@1359 -- # free_mb=4096 00:19:20.054 16:11:49 -- common/autotest_common.sh@1360 -- # echo 4096 00:19:20.054 4096 00:19:20.054 16:11:49 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 4096 00:19:20.054 16:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.054 16:11:49 -- common/autotest_common.sh@10 -- # set +x 00:19:20.054 c53dc843-31a6-4528-bee2-c0290285b577 00:19:20.054 16:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.054 16:11:49 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:19:20.054 16:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.054 16:11:49 -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.054 16:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.054 16:11:49 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:19:20.054 16:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.055 16:11:49 -- common/autotest_common.sh@10 -- # set +x 00:19:20.055 16:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.055 16:11:49 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:20.055 16:11:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.055 16:11:49 -- common/autotest_common.sh@10 -- # set +x 00:19:20.055 16:11:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.055 16:11:49 -- host/fio.sh@57 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:20.055 16:11:49 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:20.055 16:11:49 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:20.055 16:11:49 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:20.055 16:11:49 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:20.055 16:11:49 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:20.055 16:11:49 -- common/autotest_common.sh@1327 -- # shift 00:19:20.055 16:11:49 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:20.055 16:11:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:20.055 16:11:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:20.055 16:11:49 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:20.055 16:11:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:20.055 16:11:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:20.055 16:11:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:20.055 16:11:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:20.055 16:11:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:20.055 16:11:49 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:20.055 16:11:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:20.055 16:11:50 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:20.055 16:11:50 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:20.055 16:11:50 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:20.055 16:11:50 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:20.312 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:20.312 fio-3.35 00:19:20.312 Starting 1 thread 00:19:22.847 00:19:22.847 test: (groupid=0, jobs=1): err= 0: pid=87329: Mon Apr 15 16:11:52 2024 00:19:22.847 read: IOPS=6702, BW=26.2MiB/s (27.5MB/s)(52.5MiB/2007msec) 00:19:22.847 slat (nsec): min=1635, max=421483, avg=2279.16, 
stdev=4234.65 00:19:22.847 clat (usec): min=2953, max=17232, avg=9995.64, stdev=823.82 00:19:22.847 lat (usec): min=2970, max=17234, avg=9997.92, stdev=823.42 00:19:22.847 clat percentiles (usec): 00:19:22.847 | 1.00th=[ 8160], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:19:22.847 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:19:22.847 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:19:22.847 | 99.00th=[11863], 99.50th=[12256], 99.90th=[16188], 99.95th=[16909], 00:19:22.847 | 99.99th=[17171] 00:19:22.847 bw ( KiB/s): min=25992, max=27296, per=99.79%, avg=26752.00, stdev=616.19, samples=4 00:19:22.847 iops : min= 6498, max= 6824, avg=6688.00, stdev=154.05, samples=4 00:19:22.847 write: IOPS=6703, BW=26.2MiB/s (27.5MB/s)(52.6MiB/2007msec); 0 zone resets 00:19:22.847 slat (nsec): min=1695, max=232074, avg=2384.59, stdev=2410.57 00:19:22.847 clat (usec): min=2604, max=16858, avg=9028.77, stdev=749.10 00:19:22.847 lat (usec): min=2621, max=16860, avg=9031.15, stdev=748.86 00:19:22.847 clat percentiles (usec): 00:19:22.847 | 1.00th=[ 7439], 5.00th=[ 7963], 10.00th=[ 8160], 20.00th=[ 8455], 00:19:22.847 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241], 00:19:22.847 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10159], 00:19:22.847 | 99.00th=[10683], 99.50th=[10945], 99.90th=[13698], 99.95th=[15139], 00:19:22.847 | 99.99th=[16909] 00:19:22.847 bw ( KiB/s): min=26560, max=27016, per=99.96%, avg=26802.00, stdev=194.03, samples=4 00:19:22.847 iops : min= 6640, max= 6754, avg=6700.50, stdev=48.51, samples=4 00:19:22.847 lat (msec) : 4=0.06%, 10=71.74%, 20=28.20% 00:19:22.847 cpu : usr=73.68%, sys=22.28%, ctx=8, majf=0, minf=4 00:19:22.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:22.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:22.847 issued rwts: total=13451,13453,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:22.847 00:19:22.847 Run status group 0 (all jobs): 00:19:22.847 READ: bw=26.2MiB/s (27.5MB/s), 26.2MiB/s-26.2MiB/s (27.5MB/s-27.5MB/s), io=52.5MiB (55.1MB), run=2007-2007msec 00:19:22.847 WRITE: bw=26.2MiB/s (27.5MB/s), 26.2MiB/s-26.2MiB/s (27.5MB/s-27.5MB/s), io=52.6MiB (55.1MB), run=2007-2007msec 00:19:22.847 16:11:52 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:22.847 16:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.847 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:19:22.847 16:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.847 16:11:52 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:19:22.847 16:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.847 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:19:22.847 16:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.847 16:11:52 -- host/fio.sh@62 -- # ls_nested_guid=1ce8fff1-b8ed-4de9-bdf3-5e207574cf64 00:19:22.847 16:11:52 -- host/fio.sh@63 -- # get_lvs_free_mb 1ce8fff1-b8ed-4de9-bdf3-5e207574cf64 00:19:22.847 16:11:52 -- common/autotest_common.sh@1350 -- # local lvs_uuid=1ce8fff1-b8ed-4de9-bdf3-5e207574cf64 00:19:22.847 16:11:52 -- common/autotest_common.sh@1351 -- # local lvs_info 00:19:22.847 16:11:52 -- common/autotest_common.sh@1352 -- # local 
fc 00:19:22.847 16:11:52 -- common/autotest_common.sh@1353 -- # local cs 00:19:22.847 16:11:52 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:22.847 16:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.847 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:19:22.847 16:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.847 16:11:52 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:19:22.847 { 00:19:22.847 "uuid": "3888eb28-160f-4b7e-901d-c917df502660", 00:19:22.847 "name": "lvs_0", 00:19:22.847 "base_bdev": "Nvme0n1", 00:19:22.847 "total_data_clusters": 4, 00:19:22.847 "free_clusters": 0, 00:19:22.847 "block_size": 4096, 00:19:22.847 "cluster_size": 1073741824 00:19:22.847 }, 00:19:22.847 { 00:19:22.847 "uuid": "1ce8fff1-b8ed-4de9-bdf3-5e207574cf64", 00:19:22.847 "name": "lvs_n_0", 00:19:22.847 "base_bdev": "c53dc843-31a6-4528-bee2-c0290285b577", 00:19:22.847 "total_data_clusters": 1022, 00:19:22.847 "free_clusters": 1022, 00:19:22.847 "block_size": 4096, 00:19:22.847 "cluster_size": 4194304 00:19:22.847 } 00:19:22.847 ]' 00:19:22.847 16:11:52 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="1ce8fff1-b8ed-4de9-bdf3-5e207574cf64") .free_clusters' 00:19:22.847 16:11:52 -- common/autotest_common.sh@1355 -- # fc=1022 00:19:22.848 16:11:52 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="1ce8fff1-b8ed-4de9-bdf3-5e207574cf64") .cluster_size' 00:19:22.848 16:11:52 -- common/autotest_common.sh@1356 -- # cs=4194304 00:19:22.848 16:11:52 -- common/autotest_common.sh@1359 -- # free_mb=4088 00:19:22.848 16:11:52 -- common/autotest_common.sh@1360 -- # echo 4088 00:19:22.848 4088 00:19:22.848 16:11:52 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:19:22.848 16:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.848 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:19:22.848 4fcb2de1-a510-44a0-85c5-85cba53e75aa 00:19:22.848 16:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.848 16:11:52 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:19:22.848 16:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.848 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:19:22.848 16:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.848 16:11:52 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:19:22.848 16:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.848 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:19:22.848 16:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.848 16:11:52 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:19:22.848 16:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.848 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:19:22.848 16:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.848 16:11:52 -- host/fio.sh@68 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:22.848 16:11:52 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:22.848 
16:11:52 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:22.848 16:11:52 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:22.848 16:11:52 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:22.848 16:11:52 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:22.848 16:11:52 -- common/autotest_common.sh@1327 -- # shift 00:19:22.848 16:11:52 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:22.848 16:11:52 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:22.848 16:11:52 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:22.848 16:11:52 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:22.848 16:11:52 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:22.848 16:11:52 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:22.848 16:11:52 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:22.848 16:11:52 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:22.848 16:11:52 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:22.848 16:11:52 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:22.848 16:11:52 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:22.848 16:11:52 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:22.848 16:11:52 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:22.848 16:11:52 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:22.848 16:11:52 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:22.848 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:22.848 fio-3.35 00:19:22.848 Starting 1 thread 00:19:25.380 00:19:25.380 test: (groupid=0, jobs=1): err= 0: pid=87384: Mon Apr 15 16:11:55 2024 00:19:25.380 read: IOPS=6009, BW=23.5MiB/s (24.6MB/s)(47.2MiB/2009msec) 00:19:25.380 slat (nsec): min=1628, max=273512, avg=2350.08, stdev=3266.33 00:19:25.380 clat (usec): min=2903, max=19554, avg=11158.65, stdev=912.80 00:19:25.380 lat (usec): min=2909, max=19556, avg=11161.00, stdev=912.54 00:19:25.380 clat percentiles (usec): 00:19:25.380 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10421], 00:19:25.380 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:19:25.380 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:19:25.380 | 99.00th=[13173], 99.50th=[13435], 99.90th=[17695], 99.95th=[17957], 00:19:25.380 | 99.99th=[19530] 00:19:25.380 bw ( KiB/s): min=23008, max=24648, per=99.93%, avg=24024.00, stdev=707.66, samples=4 00:19:25.380 iops : min= 5752, max= 6162, avg=6006.00, stdev=176.91, samples=4 00:19:25.380 write: IOPS=5998, BW=23.4MiB/s (24.6MB/s)(47.1MiB/2009msec); 0 zone resets 00:19:25.380 slat (nsec): min=1671, max=150183, avg=2451.86, stdev=2079.11 00:19:25.380 clat (usec): min=1895, max=16050, avg=10064.61, stdev=846.99 00:19:25.380 lat (usec): min=1904, max=16052, avg=10067.06, stdev=846.89 00:19:25.380 clat percentiles (usec): 00:19:25.380 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:19:25.380 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:19:25.380 | 
70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:19:25.380 | 99.00th=[11994], 99.50th=[12256], 99.90th=[14615], 99.95th=[15926], 00:19:25.380 | 99.99th=[16057] 00:19:25.380 bw ( KiB/s): min=23768, max=24424, per=99.91%, avg=23970.00, stdev=306.19, samples=4 00:19:25.380 iops : min= 5942, max= 6106, avg=5992.50, stdev=76.55, samples=4 00:19:25.380 lat (msec) : 2=0.01%, 4=0.06%, 10=27.03%, 20=72.91% 00:19:25.380 cpu : usr=73.90%, sys=21.91%, ctx=257, majf=0, minf=4 00:19:25.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:25.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.380 issued rwts: total=12074,12050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.380 00:19:25.380 Run status group 0 (all jobs): 00:19:25.380 READ: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=47.2MiB (49.5MB), run=2009-2009msec 00:19:25.380 WRITE: bw=23.4MiB/s (24.6MB/s), 23.4MiB/s-23.4MiB/s (24.6MB/s-24.6MB/s), io=47.1MiB (49.4MB), run=2009-2009msec 00:19:25.380 16:11:55 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:25.380 16:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.380 16:11:55 -- common/autotest_common.sh@10 -- # set +x 00:19:25.380 16:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.380 16:11:55 -- host/fio.sh@72 -- # sync 00:19:25.380 16:11:55 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:19:25.380 16:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.380 16:11:55 -- common/autotest_common.sh@10 -- # set +x 00:19:25.380 16:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.380 16:11:55 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:19:25.380 16:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.380 16:11:55 -- common/autotest_common.sh@10 -- # set +x 00:19:25.380 16:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.380 16:11:55 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:19:25.380 16:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.380 16:11:55 -- common/autotest_common.sh@10 -- # set +x 00:19:25.380 16:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.380 16:11:55 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:19:25.380 16:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.380 16:11:55 -- common/autotest_common.sh@10 -- # set +x 00:19:25.380 16:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.380 16:11:55 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:19:25.380 16:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.380 16:11:55 -- common/autotest_common.sh@10 -- # set +x 00:19:26.316 16:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.316 16:11:55 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:19:26.316 16:11:55 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:19:26.316 16:11:55 -- host/fio.sh@84 -- # nvmftestfini 00:19:26.316 16:11:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:26.316 16:11:55 -- nvmf/common.sh@117 -- # sync 00:19:26.316 16:11:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:26.316 16:11:55 -- nvmf/common.sh@120 -- # set +e 00:19:26.316 16:11:55 -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:19:26.316 16:11:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:26.316 rmmod nvme_tcp 00:19:26.316 rmmod nvme_fabrics 00:19:26.316 16:11:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:26.316 16:11:56 -- nvmf/common.sh@124 -- # set -e 00:19:26.316 16:11:56 -- nvmf/common.sh@125 -- # return 0 00:19:26.316 16:11:56 -- nvmf/common.sh@478 -- # '[' -n 87141 ']' 00:19:26.316 16:11:56 -- nvmf/common.sh@479 -- # killprocess 87141 00:19:26.317 16:11:56 -- common/autotest_common.sh@936 -- # '[' -z 87141 ']' 00:19:26.317 16:11:56 -- common/autotest_common.sh@940 -- # kill -0 87141 00:19:26.317 16:11:56 -- common/autotest_common.sh@941 -- # uname 00:19:26.317 16:11:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:26.317 16:11:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87141 00:19:26.317 killing process with pid 87141 00:19:26.317 16:11:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:26.317 16:11:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:26.317 16:11:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87141' 00:19:26.317 16:11:56 -- common/autotest_common.sh@955 -- # kill 87141 00:19:26.317 16:11:56 -- common/autotest_common.sh@960 -- # wait 87141 00:19:26.317 16:11:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:26.317 16:11:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:26.317 16:11:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:26.317 16:11:56 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:26.317 16:11:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:26.317 16:11:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.317 16:11:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.317 16:11:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.575 16:11:56 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:26.575 ************************************ 00:19:26.575 END TEST nvmf_fio_host 00:19:26.575 ************************************ 00:19:26.575 00:19:26.575 real 0m13.292s 00:19:26.575 user 0m54.426s 00:19:26.576 sys 0m4.134s 00:19:26.576 16:11:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:26.576 16:11:56 -- common/autotest_common.sh@10 -- # set +x 00:19:26.576 16:11:56 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:26.576 16:11:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:26.576 16:11:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:26.576 16:11:56 -- common/autotest_common.sh@10 -- # set +x 00:19:26.576 ************************************ 00:19:26.576 START TEST nvmf_failover 00:19:26.576 ************************************ 00:19:26.576 16:11:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:26.834 * Looking for test storage... 
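For context, the nvmf_fio_host run that finishes above drives its I/O through fio's SPDK external ioengine rather than the kernel NVMe initiator: the plugin is LD_PRELOADed and the NVMe/TCP connection parameters are packed into the --filename argument. Reduced to its essentials (same plugin path, job file and target address as recorded in the log; this is a condensed sketch, not a re-run):

    # ioengine=spdk in example_config.fio routes I/O through the preloaded plugin,
    # and the --filename string is parsed as NVMe-oF connection parameters.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096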
00:19:26.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:26.834 16:11:56 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:26.834 16:11:56 -- nvmf/common.sh@7 -- # uname -s 00:19:26.834 16:11:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.834 16:11:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.834 16:11:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.834 16:11:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.834 16:11:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.834 16:11:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.834 16:11:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.834 16:11:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.834 16:11:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.835 16:11:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.835 16:11:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:19:26.835 16:11:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:19:26.835 16:11:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.835 16:11:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.835 16:11:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:26.835 16:11:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.835 16:11:56 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:26.835 16:11:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.835 16:11:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.835 16:11:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.835 16:11:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.835 16:11:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.835 16:11:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.835 16:11:56 -- paths/export.sh@5 -- # export PATH 00:19:26.835 16:11:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.835 16:11:56 -- nvmf/common.sh@47 -- # : 0 00:19:26.835 16:11:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:26.835 16:11:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:26.835 16:11:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.835 16:11:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.835 16:11:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.835 16:11:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:26.835 16:11:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:26.835 16:11:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:26.835 16:11:56 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:26.835 16:11:56 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:26.835 16:11:56 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.835 16:11:56 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:26.835 16:11:56 -- host/failover.sh@18 -- # nvmftestinit 00:19:26.835 16:11:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:26.835 16:11:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.835 16:11:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:26.835 16:11:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:26.835 16:11:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:26.835 16:11:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.835 16:11:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.835 16:11:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.835 16:11:56 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:26.835 16:11:56 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:26.835 16:11:56 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:26.835 16:11:56 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:26.835 16:11:56 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:26.835 16:11:56 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:26.835 16:11:56 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.835 16:11:56 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.835 16:11:56 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:26.835 16:11:56 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:26.835 16:11:56 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:26.835 16:11:56 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:26.835 16:11:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:26.835 16:11:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.835 16:11:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:26.835 16:11:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:26.835 16:11:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:26.835 16:11:56 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:26.835 16:11:56 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:26.835 16:11:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:26.835 Cannot find device "nvmf_tgt_br" 00:19:26.835 16:11:56 -- nvmf/common.sh@155 -- # true 00:19:26.835 16:11:56 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:26.835 Cannot find device "nvmf_tgt_br2" 00:19:26.835 16:11:56 -- nvmf/common.sh@156 -- # true 00:19:26.835 16:11:56 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:26.835 16:11:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:26.835 Cannot find device "nvmf_tgt_br" 00:19:26.835 16:11:56 -- nvmf/common.sh@158 -- # true 00:19:26.835 16:11:56 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:26.835 Cannot find device "nvmf_tgt_br2" 00:19:26.835 16:11:56 -- nvmf/common.sh@159 -- # true 00:19:26.835 16:11:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:26.835 16:11:56 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:26.835 16:11:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:26.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:26.835 16:11:56 -- nvmf/common.sh@162 -- # true 00:19:26.835 16:11:56 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:26.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:26.835 16:11:56 -- nvmf/common.sh@163 -- # true 00:19:26.835 16:11:56 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:26.835 16:11:56 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:26.835 16:11:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:26.835 16:11:56 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:26.835 16:11:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:26.835 16:11:56 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:27.093 16:11:56 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:27.093 16:11:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:27.093 16:11:56 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:27.093 16:11:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:27.093 16:11:56 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:27.093 16:11:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:27.093 16:11:56 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:27.093 16:11:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:19:27.093 16:11:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:27.093 16:11:56 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:27.093 16:11:56 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:27.093 16:11:56 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:27.093 16:11:56 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:27.093 16:11:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:27.093 16:11:56 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:27.093 16:11:56 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:27.093 16:11:56 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:27.093 16:11:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:27.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:27.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:19:27.093 00:19:27.093 --- 10.0.0.2 ping statistics --- 00:19:27.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.093 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:19:27.093 16:11:56 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:27.093 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:27.093 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:19:27.093 00:19:27.093 --- 10.0.0.3 ping statistics --- 00:19:27.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.093 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:27.093 16:11:56 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:27.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:27.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:27.093 00:19:27.093 --- 10.0.0.1 ping statistics --- 00:19:27.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.093 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:27.093 16:11:56 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.093 16:11:56 -- nvmf/common.sh@422 -- # return 0 00:19:27.093 16:11:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:27.093 16:11:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.094 16:11:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:27.094 16:11:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:27.094 16:11:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.094 16:11:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:27.094 16:11:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:27.094 16:11:56 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:27.094 16:11:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:27.094 16:11:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:27.094 16:11:56 -- common/autotest_common.sh@10 -- # set +x 00:19:27.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
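The nvmf_veth_init sequence above gives the TCP target its own network namespace while the initiator stays in the root namespace: veth pairs carry 10.0.0.1 (initiator) and 10.0.0.2/10.0.0.3 (target namespace), a bridge joins the root-side peers, and an iptables rule admits NVMe/TCP traffic on port 4420. Condensed to the commands that matter for this run (names and addresses as in the log; the "Cannot find device" lines above are just cleanup of a leftover topology and are omitted):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per side; the *_br peers stay in the root namespace for bridging
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # the pings above confirm this path before the target starts

The second target interface (nvmf_tgt_if2, 10.0.0.3) is created the same way but is not exercised by this run's listeners, which all sit on 10.0.0.2.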
00:19:27.094 16:11:56 -- nvmf/common.sh@470 -- # nvmfpid=87608 00:19:27.094 16:11:56 -- nvmf/common.sh@471 -- # waitforlisten 87608 00:19:27.094 16:11:56 -- common/autotest_common.sh@817 -- # '[' -z 87608 ']' 00:19:27.094 16:11:56 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:27.094 16:11:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.094 16:11:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:27.094 16:11:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.094 16:11:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:27.094 16:11:56 -- common/autotest_common.sh@10 -- # set +x 00:19:27.094 [2024-04-15 16:11:57.035285] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:19:27.094 [2024-04-15 16:11:57.035560] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.351 [2024-04-15 16:11:57.194636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:27.351 [2024-04-15 16:11:57.250073] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.351 [2024-04-15 16:11:57.250313] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.351 [2024-04-15 16:11:57.250515] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.352 [2024-04-15 16:11:57.250689] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.352 [2024-04-15 16:11:57.250740] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
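The app_setup_trace notices above double as a recipe for pulling a trace out of this target: it was started with -e 0xFFFF and shm id 0 (-i 0), so events can be snapshotted from the live process or recovered from shared memory afterwards. As the notices themselves suggest (commands taken from them; the copy destination is an arbitrary example):

    spdk_trace -s nvmf -i 0           # snapshot events from the running nvmf target
    cp /dev/shm/nvmf_trace.0 /tmp/    # or keep the shm file for offline analysis/debug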
00:19:27.352 [2024-04-15 16:11:57.251025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.352 [2024-04-15 16:11:57.251251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:27.352 [2024-04-15 16:11:57.251255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.315 16:11:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:28.315 16:11:58 -- common/autotest_common.sh@850 -- # return 0 00:19:28.315 16:11:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:28.315 16:11:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:28.315 16:11:58 -- common/autotest_common.sh@10 -- # set +x 00:19:28.315 16:11:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.315 16:11:58 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:28.573 [2024-04-15 16:11:58.344251] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.573 16:11:58 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:28.831 Malloc0 00:19:28.831 16:11:58 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:29.089 16:11:58 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:29.348 16:11:59 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.606 [2024-04-15 16:11:59.446303] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.606 16:11:59 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:29.863 [2024-04-15 16:11:59.726627] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:29.863 16:11:59 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:19:30.121 [2024-04-15 16:11:59.958854] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:19:30.121 16:11:59 -- host/failover.sh@31 -- # bdevperf_pid=87670 00:19:30.121 16:11:59 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:30.121 16:11:59 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:30.121 16:11:59 -- host/failover.sh@34 -- # waitforlisten 87670 /var/tmp/bdevperf.sock 00:19:30.121 16:11:59 -- common/autotest_common.sh@817 -- # '[' -z 87670 ']' 00:19:30.121 16:11:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.121 16:11:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:30.121 16:11:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
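Taken together, the host/failover.sh setup steps above create one TCP transport, a Malloc-backed subsystem, and three listeners on the same address that the script can later drop and re-add, while bdevperf waits in -z (wait-for-RPC) mode on its own socket as the initiator. Roughly, using the same rpc.py calls the log records (condensed into a loop for brevity):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # initiator side: bdevperf idles on /var/tmp/bdevperf.sock until controllers are attached,
    # then runs 128-deep 4 KiB verify I/O for 15 seconds
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &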
00:19:30.121 16:11:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:30.121 16:11:59 -- common/autotest_common.sh@10 -- # set +x 00:19:30.378 16:12:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:30.378 16:12:00 -- common/autotest_common.sh@850 -- # return 0 00:19:30.378 16:12:00 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:30.978 NVMe0n1 00:19:30.978 16:12:00 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:31.236 00:19:31.236 16:12:01 -- host/failover.sh@39 -- # run_test_pid=87686 00:19:31.236 16:12:01 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:31.236 16:12:01 -- host/failover.sh@41 -- # sleep 1 00:19:32.176 16:12:02 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:32.435 [2024-04-15 16:12:02.303699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1280 is same with the state(5) to be set 00:19:32.435 [2024-04-15 16:12:02.303980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1280 is same with the state(5) to be set 00:19:32.435 [2024-04-15 16:12:02.304151] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1280 is same with the state(5) to be set 00:19:32.435 [2024-04-15 16:12:02.304315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1280 is same with the state(5) to be set 00:19:32.435 [2024-04-15 16:12:02.304465] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1280 is same with the state(5) to be set 00:19:32.435 [2024-04-15 16:12:02.304628] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1280 is same with the state(5) to be set 00:19:32.435 [2024-04-15 16:12:02.304775] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1280 is same with the state(5) to be set 00:19:32.435 [2024-04-15 16:12:02.304913] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1280 is same with the state(5) to be set 00:19:32.435 [2024-04-15 16:12:02.305044] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1280 is same with the state(5) to be set 00:19:32.435 [2024-04-15 16:12:02.305180] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1280 is same with the state(5) to be set 00:19:32.435 [2024-04-15 16:12:02.305316] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1280 is same with the state(5) to be set 00:19:32.435 [2024-04-15 16:12:02.305463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1280 is same with the state(5) to be set 00:19:32.435 [2024-04-15 16:12:02.305631] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1280 is same with the state(5) to be set 00:19:32.435 16:12:02 -- host/failover.sh@45 -- # sleep 3 00:19:35.743 16:12:05 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:35.743 00:19:35.743 16:12:05 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:36.309 [2024-04-15 16:12:05.990696] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.309 [2024-04-15 16:12:05.990984] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.309 [2024-04-15 16:12:05.991101] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.309 [2024-04-15 16:12:05.991201] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.309 [2024-04-15 16:12:05.991329] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.309 [2024-04-15 16:12:05.991404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.309 [2024-04-15 16:12:05.991454] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.310 [2024-04-15 16:12:05.991521] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.310 [2024-04-15 16:12:05.991588] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.310 [2024-04-15 16:12:05.991692] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.310 [2024-04-15 16:12:05.991747] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.310 [2024-04-15 16:12:05.991841] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.310 [2024-04-15 16:12:05.991946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.310 [2024-04-15 16:12:05.992003] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.310 [2024-04-15 16:12:05.992085] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.310 [2024-04-15 16:12:05.992155] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.310 [2024-04-15 16:12:05.992236] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.310 [2024-04-15 16:12:05.992291] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.310 [2024-04-15 16:12:05.992341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the state(5) to be set 00:19:36.310 [2024-04-15 16:12:05.992414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a1960 is same with the 
state(5) to be set 00:19:36.310 16:12:06 -- host/failover.sh@50 -- # sleep 3 00:19:39.688 16:12:09 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:39.688 [2024-04-15 16:12:09.264446] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.688 16:12:09 -- host/failover.sh@55 -- # sleep 1 00:19:40.682 16:12:10 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:19:40.682 [2024-04-15 16:12:10.537750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.538018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.538135] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.538233] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.538382] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.538564] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.538685] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.538780] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.538835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.538917] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.539026] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.539135] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.539231] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.539286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.539389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.539452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.539509] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.539608] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with 
the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.539709] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.539790] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.539873] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.539924] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.539982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.540030] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 [2024-04-15 16:12:10.540100] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f02c0 is same with the state(5) to be set 00:19:40.682 16:12:10 -- host/failover.sh@59 -- # wait 87686 00:19:47.247 0 00:19:47.247 16:12:16 -- host/failover.sh@61 -- # killprocess 87670 00:19:47.247 16:12:16 -- common/autotest_common.sh@936 -- # '[' -z 87670 ']' 00:19:47.247 16:12:16 -- common/autotest_common.sh@940 -- # kill -0 87670 00:19:47.247 16:12:16 -- common/autotest_common.sh@941 -- # uname 00:19:47.247 16:12:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:47.247 16:12:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87670 00:19:47.247 16:12:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:47.247 16:12:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:47.247 16:12:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87670' 00:19:47.247 killing process with pid 87670 00:19:47.247 16:12:16 -- common/autotest_common.sh@955 -- # kill 87670 00:19:47.247 16:12:16 -- common/autotest_common.sh@960 -- # wait 87670 00:19:47.247 16:12:16 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:47.247 [2024-04-15 16:12:00.036127] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:19:47.247 [2024-04-15 16:12:00.036848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87670 ] 00:19:47.247 [2024-04-15 16:12:00.185176] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.247 [2024-04-15 16:12:00.238592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.247 Running I/O for 15 seconds... 
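The try.txt dump that begins above (and continues below with the aborted completions) is the bdevperf side of the failover exercise: NVMe0 is attached over ports 4420 and 4421, perform_tests starts the 15-second verify run, and while it is in flight the target drops 4420, the host adds a path on 4422, the target drops 4421, re-adds 4420 and finally drops 4422. Condensed from the commands recorded above (same sockets, address and NQN):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    brpc="$rpc -s /var/tmp/bdevperf.sock"
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421; sleep 3
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    wait    # the verify run has to ride out every path change

The long runs of "ABORTED - SQ DELETION" completions below are the visible cost of each removal: the qpairs on the dropped listener are torn down, their outstanding commands complete with abort status, and the multipath NVMe0 bdev is expected to resubmit them on a path that is still listening.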
00:19:47.247 [2024-04-15 16:12:02.304345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.247 [2024-04-15 16:12:02.304399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.304417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.247 [2024-04-15 16:12:02.304432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.304448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.247 [2024-04-15 16:12:02.304463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.304478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.247 [2024-04-15 16:12:02.304492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.304507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8bd5f0 is same with the state(5) to be set 00:19:47.247 [2024-04-15 16:12:02.305805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.247 [2024-04-15 16:12:02.305833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.305860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.247 [2024-04-15 16:12:02.305879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.305900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.247 [2024-04-15 16:12:02.305919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.305939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.247 [2024-04-15 16:12:02.305957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.305978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.247 [2024-04-15 16:12:02.305997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.306017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.247 [2024-04-15 16:12:02.306035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.306056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.247 [2024-04-15 16:12:02.306094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.306115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.247 [2024-04-15 16:12:02.306134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.306154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.247 [2024-04-15 16:12:02.306173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.306193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.247 [2024-04-15 16:12:02.306212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.306232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.247 [2024-04-15 16:12:02.306251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.306271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.247 [2024-04-15 16:12:02.306290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.306310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.247 [2024-04-15 16:12:02.306328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.247 [2024-04-15 16:12:02.306349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.248 [2024-04-15 16:12:02.306368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.306970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.306990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 
16:12:02.307261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.248 [2024-04-15 16:12:02.307340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.248 [2024-04-15 16:12:02.307371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.248 [2024-04-15 16:12:02.307402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.248 [2024-04-15 16:12:02.307434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.248 [2024-04-15 16:12:02.307466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.248 [2024-04-15 16:12:02.307499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.248 [2024-04-15 16:12:02.307536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.248 [2024-04-15 16:12:02.307567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.248 [2024-04-15 16:12:02.307801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.248 [2024-04-15 16:12:02.307817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.249 [2024-04-15 16:12:02.307835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.249 [2024-04-15 16:12:02.307852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.249 [2024-04-15 16:12:02.307866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.249 [2024-04-15 16:12:02.307883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:47.249 [2024-04-15 16:12:02.307898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.249 [2024-04-15 16:12:02.307915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:47.249 [2024-04-15 16:12:02.307930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-04-15 16:12:02.307952 - 16:12:02.310286: nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs repeat for the remaining queued I/O on sqid:1 (WRITE lba 71768-72072 and READ lba 71232-71472, len:8 each); every command is completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:19:47.250 [2024-04-15 16:12:02.310303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e9400 is same with the state(5) to be set
00:19:47.250 [2024-04-15 16:12:02.310325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:47.250 [2024-04-15 16:12:02.310337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:47.250 [2024-04-15 16:12:02.310349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71480 len:8 PRP1 0x0 PRP2 0x0
00:19:47.250 [2024-04-15 16:12:02.310364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:47.250 [2024-04-15 16:12:02.310419] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8e9400 was disconnected and freed. reset controller.
00:19:47.250 [2024-04-15 16:12:02.310437] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:19:47.250 [2024-04-15 16:12:02.310453] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:47.250 [2024-04-15 16:12:02.313778] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:47.250 [2024-04-15 16:12:02.313828] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bd5f0 (9): Bad file descriptor
00:19:47.250 [2024-04-15 16:12:02.347039] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:47.250 [2024-04-15 16:12:05.992518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:47.250 [2024-04-15 16:12:05.992567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-04-15 16:12:05.992620 - 16:12:05.996625: nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs repeat for another batch of queued I/O on sqid:1 (READ lba 100576-100968 and WRITE lba 100992-101584, len:8 each); every command is completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:19:47.254 [2024-04-15 16:12:05.996641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:47.254 [2024-04-15 16:12:05.996656] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.254 [2024-04-15 16:12:05.996677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98750 is same with the state(5) to be set 00:19:47.254 [2024-04-15 16:12:05.996696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:47.254 [2024-04-15 16:12:05.996707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:47.254 [2024-04-15 16:12:05.996719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100984 len:8 PRP1 0x0 PRP2 0x0 00:19:47.254 [2024-04-15 16:12:05.996734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.254 [2024-04-15 16:12:05.996791] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa98750 was disconnected and freed. reset controller. 00:19:47.254 [2024-04-15 16:12:05.996809] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:19:47.254 [2024-04-15 16:12:05.996860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.254 [2024-04-15 16:12:05.996878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.254 [2024-04-15 16:12:05.996894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.254 [2024-04-15 16:12:05.996909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.254 [2024-04-15 16:12:05.996924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.254 [2024-04-15 16:12:05.996939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.254 [2024-04-15 16:12:05.996954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.254 [2024-04-15 16:12:05.996969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.254 [2024-04-15 16:12:05.996984] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:47.254 [2024-04-15 16:12:05.997029] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bd5f0 (9): Bad file descriptor 00:19:47.254 [2024-04-15 16:12:06.000271] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:47.254 [2024-04-15 16:12:06.032119] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:47.254 [2024-04-15 16:12:10.540197 .. 16:12:10.544221] nvme_qpair.c: 243/474: *NOTICE*: queued READ commands (lba 69688-70200, SGL TRANSPORT DATA BLOCK) and WRITE commands (lba 70216-70704, SGL DATA BLOCK OFFSET len:0x1000) on sqid:1 each reported ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0 
00:19:47.257 [2024-04-15 16:12:10.544236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98750 is same with the state(5) to be set 
00:19:47.258 [2024-04-15 16:12:10.544253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:19:47.258 [2024-04-15 16:12:10.544264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:19:47.258 [2024-04-15 16:12:10.544275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70208 len:8 PRP1 0x0 PRP2 0x0 
00:19:47.258 [2024-04-15 16:12:10.544289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:47.258 [2024-04-15 16:12:10.544342] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa98750 was disconnected and freed. reset controller. 
00:19:47.258 [2024-04-15 16:12:10.544359] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 
00:19:47.258 [2024-04-15 16:12:10.544407 .. 16:12:10.544520] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0,1,2,3 each ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:47.258 [2024-04-15 16:12:10.544534] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:47.258 [2024-04-15 16:12:10.544587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8bd5f0 (9): Bad file descriptor 
00:19:47.258 [2024-04-15 16:12:10.547770] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:19:47.258 [2024-04-15 16:12:10.576162] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
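The pattern above is the failover path the test exercises: pending I/O on the old queue pair is completed as ABORTED - SQ DELETION, bdev_nvme frees the disconnected qpair, fails over to the next transport ID, and resets the controller. A minimal sketch, assuming the bdevperf console output has been captured to try.txt (as host/failover.sh later does with cat), of pulling those events back out of the captured log; the first grep is an illustrative helper, the second mirrors the check the script itself performs at host/failover.sh@65:

  # list the failover transitions recorded by bdev_nvme_failover_trid
  grep -o 'Start failover from [0-9.:]* to [0-9.:]*' try.txt
  # count successful controller resets (the test expects 3)
  grep -c 'Resetting controller successful' try.txt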
00:19:47.258 00:19:47.258 Latency(us) 00:19:47.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.258 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:47.258 Verification LBA range: start 0x0 length 0x4000 00:19:47.258 NVMe0n1 : 15.01 9390.08 36.68 244.97 0.00 13257.45 507.12 15915.89 00:19:47.258 =================================================================================================================== 00:19:47.258 Total : 9390.08 36.68 244.97 0.00 13257.45 507.12 15915.89 00:19:47.258 Received shutdown signal, test time was about 15.000000 seconds 00:19:47.258 00:19:47.258 Latency(us) 00:19:47.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.258 =================================================================================================================== 00:19:47.258 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:47.258 16:12:16 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:47.258 16:12:16 -- host/failover.sh@65 -- # count=3 00:19:47.258 16:12:16 -- host/failover.sh@67 -- # (( count != 3 )) 00:19:47.258 16:12:16 -- host/failover.sh@73 -- # bdevperf_pid=87867 00:19:47.258 16:12:16 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:47.258 16:12:16 -- host/failover.sh@75 -- # waitforlisten 87867 /var/tmp/bdevperf.sock 00:19:47.258 16:12:16 -- common/autotest_common.sh@817 -- # '[' -z 87867 ']' 00:19:47.258 16:12:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.258 16:12:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:47.258 16:12:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
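After confirming that the first run saw exactly three successful resets (grep -c ... = 3), the script starts a second bdevperf with -z, which brings the application up but holds off all I/O until a perform_tests RPC arrives on /var/tmp/bdevperf.sock (the bdevperf.py call further down). The launch-and-wait pattern, written out as a simplified stand-in for the waitforlisten helper traced here:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # poll the RPC socket until the application is ready to accept commands
    until $rpc -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done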
00:19:47.258 16:12:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:47.258 16:12:16 -- common/autotest_common.sh@10 -- # set +x 00:19:47.258 16:12:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:47.258 16:12:16 -- common/autotest_common.sh@850 -- # return 0 00:19:47.258 16:12:16 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:47.258 [2024-04-15 16:12:17.122562] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:47.258 16:12:17 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:19:47.516 [2024-04-15 16:12:17.418890] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:19:47.516 16:12:17 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:48.082 NVMe0n1 00:19:48.082 16:12:17 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:48.388 00:19:48.388 16:12:18 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:48.660 00:19:48.660 16:12:18 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:48.660 16:12:18 -- host/failover.sh@82 -- # grep -q NVMe0 00:19:48.919 16:12:18 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:49.484 16:12:19 -- host/failover.sh@87 -- # sleep 3 00:19:52.769 16:12:22 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:52.769 16:12:22 -- host/failover.sh@88 -- # grep -q NVMe0 00:19:52.769 16:12:22 -- host/failover.sh@90 -- # run_test_pid=87936 00:19:52.769 16:12:22 -- host/failover.sh@92 -- # wait 87936 00:19:52.769 16:12:22 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:53.701 0 00:19:53.701 16:12:23 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:53.701 [2024-04-15 16:12:16.518826] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:19:53.701 [2024-04-15 16:12:16.518939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87867 ] 00:19:53.702 [2024-04-15 16:12:16.664803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.702 [2024-04-15 16:12:16.720078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.702 [2024-04-15 16:12:19.156158] bdev_nvme.c:1853:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:19:53.702 [2024-04-15 16:12:19.156276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.702 [2024-04-15 16:12:19.156299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.702 [2024-04-15 16:12:19.156318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.702 [2024-04-15 16:12:19.156333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.702 [2024-04-15 16:12:19.156349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.702 [2024-04-15 16:12:19.156363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.702 [2024-04-15 16:12:19.156379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:53.702 [2024-04-15 16:12:19.156394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.702 [2024-04-15 16:12:19.156410] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:53.702 [2024-04-15 16:12:19.156458] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:53.702 [2024-04-15 16:12:19.156483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119e5f0 (9): Bad file descriptor 00:19:53.702 [2024-04-15 16:12:19.165529] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:53.702 Running I/O for 1 seconds... 
00:19:53.702 00:19:53.702 Latency(us) 00:19:53.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.702 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:53.702 Verification LBA range: start 0x0 length 0x4000 00:19:53.702 NVMe0n1 : 1.01 8165.26 31.90 0.00 0.00 15586.77 1849.05 17101.78 00:19:53.702 =================================================================================================================== 00:19:53.702 Total : 8165.26 31.90 0.00 0.00 15586.77 1849.05 17101.78 00:19:53.702 16:12:23 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:53.702 16:12:23 -- host/failover.sh@95 -- # grep -q NVMe0 00:19:54.278 16:12:23 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:54.546 16:12:24 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:54.546 16:12:24 -- host/failover.sh@99 -- # grep -q NVMe0 00:19:54.819 16:12:24 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:55.080 16:12:24 -- host/failover.sh@101 -- # sleep 3 00:19:58.509 16:12:27 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:58.509 16:12:27 -- host/failover.sh@103 -- # grep -q NVMe0 00:19:58.509 16:12:28 -- host/failover.sh@108 -- # killprocess 87867 00:19:58.509 16:12:28 -- common/autotest_common.sh@936 -- # '[' -z 87867 ']' 00:19:58.509 16:12:28 -- common/autotest_common.sh@940 -- # kill -0 87867 00:19:58.509 16:12:28 -- common/autotest_common.sh@941 -- # uname 00:19:58.509 16:12:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:58.509 16:12:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87867 00:19:58.509 16:12:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:58.509 16:12:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:58.509 16:12:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87867' 00:19:58.509 killing process with pid 87867 00:19:58.509 16:12:28 -- common/autotest_common.sh@955 -- # kill 87867 00:19:58.509 16:12:28 -- common/autotest_common.sh@960 -- # wait 87867 00:19:58.509 16:12:28 -- host/failover.sh@110 -- # sync 00:19:58.509 16:12:28 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:59.077 16:12:28 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:59.078 16:12:28 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:59.078 16:12:28 -- host/failover.sh@116 -- # nvmftestfini 00:19:59.078 16:12:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:59.078 16:12:28 -- nvmf/common.sh@117 -- # sync 00:19:59.078 16:12:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.078 16:12:28 -- nvmf/common.sh@120 -- # set +e 00:19:59.078 16:12:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.078 16:12:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.078 rmmod nvme_tcp 00:19:59.078 rmmod nvme_fabrics 00:19:59.078 16:12:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:19:59.078 16:12:28 -- nvmf/common.sh@124 -- # set -e 00:19:59.078 16:12:28 -- nvmf/common.sh@125 -- # return 0 00:19:59.078 16:12:28 -- nvmf/common.sh@478 -- # '[' -n 87608 ']' 00:19:59.078 16:12:28 -- nvmf/common.sh@479 -- # killprocess 87608 00:19:59.078 16:12:28 -- common/autotest_common.sh@936 -- # '[' -z 87608 ']' 00:19:59.078 16:12:28 -- common/autotest_common.sh@940 -- # kill -0 87608 00:19:59.078 16:12:28 -- common/autotest_common.sh@941 -- # uname 00:19:59.078 16:12:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:59.078 16:12:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87608 00:19:59.078 16:12:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:59.078 16:12:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:59.078 16:12:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87608' 00:19:59.078 killing process with pid 87608 00:19:59.078 16:12:28 -- common/autotest_common.sh@955 -- # kill 87608 00:19:59.078 16:12:28 -- common/autotest_common.sh@960 -- # wait 87608 00:19:59.337 16:12:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:59.337 16:12:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:59.337 16:12:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:59.337 16:12:29 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.337 16:12:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.337 16:12:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.337 16:12:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.337 16:12:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.337 16:12:29 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:59.337 ************************************ 00:19:59.337 END TEST nvmf_failover 00:19:59.337 ************************************ 00:19:59.337 00:19:59.337 real 0m32.698s 00:19:59.337 user 2m5.665s 00:19:59.337 sys 0m6.813s 00:19:59.337 16:12:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:59.337 16:12:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.337 16:12:29 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:59.337 16:12:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:59.337 16:12:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:59.337 16:12:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.337 ************************************ 00:19:59.337 START TEST nvmf_discovery 00:19:59.337 ************************************ 00:19:59.337 16:12:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:59.597 * Looking for test storage... 
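nvmf_failover passes (three resets verified, about 33 s wall time) and run_test moves on to discovery.sh. Because NET_TYPE=virt, nvmftestinit first calls nvmf_veth_init, whose full trace follows: it builds a network namespace nvmf_tgt_ns_spdk holding the target-side veth endpoints (10.0.0.2 and 10.0.0.3), bridges them to an initiator-side veth at 10.0.0.1, opens port 4420 in iptables and pings each address once. Condensed to a single target interface (the second one, nvmf_tgt_if2 at 10.0.0.3, is created the same way), the setup is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # the target address must answer before the test continues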
00:19:59.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:59.597 16:12:29 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:59.597 16:12:29 -- nvmf/common.sh@7 -- # uname -s 00:19:59.597 16:12:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.597 16:12:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.597 16:12:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.597 16:12:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.597 16:12:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.597 16:12:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.597 16:12:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.597 16:12:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.597 16:12:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.597 16:12:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.597 16:12:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:19:59.597 16:12:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:19:59.597 16:12:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.597 16:12:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.597 16:12:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:59.597 16:12:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.597 16:12:29 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:59.597 16:12:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.597 16:12:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.597 16:12:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.597 16:12:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.597 16:12:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.597 16:12:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.597 16:12:29 -- paths/export.sh@5 -- # export PATH 00:19:59.597 16:12:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.597 16:12:29 -- nvmf/common.sh@47 -- # : 0 00:19:59.597 16:12:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:59.597 16:12:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:59.597 16:12:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.597 16:12:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.597 16:12:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.597 16:12:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:59.597 16:12:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:59.597 16:12:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:59.597 16:12:29 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:59.597 16:12:29 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:59.597 16:12:29 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:59.597 16:12:29 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:59.597 16:12:29 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:59.597 16:12:29 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:59.597 16:12:29 -- host/discovery.sh@25 -- # nvmftestinit 00:19:59.597 16:12:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:59.597 16:12:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.597 16:12:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:59.597 16:12:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:59.597 16:12:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:59.597 16:12:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.597 16:12:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.597 16:12:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.597 16:12:29 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:59.597 16:12:29 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:59.597 16:12:29 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:59.597 16:12:29 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:59.597 16:12:29 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:59.597 16:12:29 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:59.597 16:12:29 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.597 16:12:29 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.597 16:12:29 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:59.597 16:12:29 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:59.598 16:12:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:59.598 16:12:29 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:59.598 16:12:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:59.598 16:12:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.598 16:12:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:59.598 16:12:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:59.598 16:12:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:59.598 16:12:29 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:59.598 16:12:29 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:59.598 16:12:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:59.598 Cannot find device "nvmf_tgt_br" 00:19:59.598 16:12:29 -- nvmf/common.sh@155 -- # true 00:19:59.598 16:12:29 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:59.598 Cannot find device "nvmf_tgt_br2" 00:19:59.598 16:12:29 -- nvmf/common.sh@156 -- # true 00:19:59.598 16:12:29 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:59.598 16:12:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:59.598 Cannot find device "nvmf_tgt_br" 00:19:59.598 16:12:29 -- nvmf/common.sh@158 -- # true 00:19:59.598 16:12:29 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:59.598 Cannot find device "nvmf_tgt_br2" 00:19:59.598 16:12:29 -- nvmf/common.sh@159 -- # true 00:19:59.598 16:12:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:59.598 16:12:29 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:59.857 16:12:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:59.857 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:59.857 16:12:29 -- nvmf/common.sh@162 -- # true 00:19:59.857 16:12:29 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:59.857 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:59.857 16:12:29 -- nvmf/common.sh@163 -- # true 00:19:59.857 16:12:29 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:59.857 16:12:29 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:59.857 16:12:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:59.857 16:12:29 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:59.857 16:12:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:59.857 16:12:29 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:59.857 16:12:29 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:59.857 16:12:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:59.857 16:12:29 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:59.857 16:12:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:59.857 16:12:29 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:59.857 16:12:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:59.857 16:12:29 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:59.857 16:12:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:59.857 16:12:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:59.857 16:12:29 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:59.857 16:12:29 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:59.857 16:12:29 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:59.857 16:12:29 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:59.857 16:12:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:59.857 16:12:29 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:59.857 16:12:29 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:59.857 16:12:29 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:59.857 16:12:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:59.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:19:59.857 00:19:59.857 --- 10.0.0.2 ping statistics --- 00:19:59.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.857 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:59.857 16:12:29 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:00.116 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:00.116 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:20:00.116 00:20:00.116 --- 10.0.0.3 ping statistics --- 00:20:00.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.116 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:00.116 16:12:29 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:00.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:00.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:00.116 00:20:00.116 --- 10.0.0.1 ping statistics --- 00:20:00.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.116 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:00.116 16:12:29 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.116 16:12:29 -- nvmf/common.sh@422 -- # return 0 00:20:00.116 16:12:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:00.116 16:12:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.116 16:12:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:00.116 16:12:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:00.116 16:12:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.116 16:12:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:00.116 16:12:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:00.116 16:12:29 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:20:00.116 16:12:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:00.116 16:12:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:00.116 16:12:29 -- common/autotest_common.sh@10 -- # set +x 00:20:00.116 16:12:29 -- nvmf/common.sh@470 -- # nvmfpid=88219 00:20:00.117 16:12:29 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:00.117 16:12:29 -- nvmf/common.sh@471 -- # waitforlisten 88219 00:20:00.117 16:12:29 -- common/autotest_common.sh@817 -- # '[' -z 88219 ']' 00:20:00.117 16:12:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.117 16:12:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:00.117 16:12:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.117 16:12:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:00.117 16:12:29 -- common/autotest_common.sh@10 -- # set +x 00:20:00.117 [2024-04-15 16:12:29.915262] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:20:00.117 [2024-04-15 16:12:29.915625] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.117 [2024-04-15 16:12:30.062659] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.380 [2024-04-15 16:12:30.118850] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.380 [2024-04-15 16:12:30.119153] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.380 [2024-04-15 16:12:30.119337] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.380 [2024-04-15 16:12:30.119648] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.380 [2024-04-15 16:12:30.119711] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
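From here on two separate SPDK applications are running: the nvmf target inside nvmf_tgt_ns_spdk (core mask 0x2; its RPC socket is assumed to be the default /var/tmp/spdk.sock) and, started just below, a second nvmf_tgt on core mask 0x1 with -r /tmp/host.sock that plays the NVMe-oF host and runs the discovery poller. Stripped of the waitforcondition/assert scaffolding that makes up most of the remaining trace, the core of the discovery test is roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side: TCP transport plus a discovery-subsystem listener on port 8009
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    # host side: follow that discovery service with the given host NQN
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test
    # anything the target then exposes (nqn.2016-06.io.spdk:cnode0 with null bdevs, below)
    # is attached on the host automatically as controller nvme0 / bdevs nvme0n1, nvme0n2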
00:20:00.380 [2024-04-15 16:12:30.119873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.380 16:12:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:00.380 16:12:30 -- common/autotest_common.sh@850 -- # return 0 00:20:00.380 16:12:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:00.380 16:12:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:00.380 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.380 16:12:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.380 16:12:30 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:00.380 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.380 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.380 [2024-04-15 16:12:30.273175] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.380 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.380 16:12:30 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:20:00.380 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.380 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.380 [2024-04-15 16:12:30.281450] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:00.380 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.380 16:12:30 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:00.380 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.380 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.380 null0 00:20:00.380 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.380 16:12:30 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:00.380 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.380 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.380 null1 00:20:00.380 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.380 16:12:30 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:20:00.380 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.380 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.380 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.380 16:12:30 -- host/discovery.sh@45 -- # hostpid=88239 00:20:00.380 16:12:30 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:00.380 16:12:30 -- host/discovery.sh@46 -- # waitforlisten 88239 /tmp/host.sock 00:20:00.380 16:12:30 -- common/autotest_common.sh@817 -- # '[' -z 88239 ']' 00:20:00.380 16:12:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:20:00.380 16:12:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:00.380 16:12:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:00.380 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:00.380 16:12:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:00.380 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.639 [2024-04-15 16:12:30.353668] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:20:00.639 [2024-04-15 16:12:30.353978] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88239 ] 00:20:00.639 [2024-04-15 16:12:30.493712] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.639 [2024-04-15 16:12:30.550062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.898 16:12:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:00.898 16:12:30 -- common/autotest_common.sh@850 -- # return 0 00:20:00.898 16:12:30 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:00.898 16:12:30 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:00.898 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.898 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.898 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.898 16:12:30 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:20:00.898 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.898 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.898 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.898 16:12:30 -- host/discovery.sh@72 -- # notify_id=0 00:20:00.898 16:12:30 -- host/discovery.sh@83 -- # get_subsystem_names 00:20:00.898 16:12:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:00.898 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.898 16:12:30 -- host/discovery.sh@59 -- # sort 00:20:00.898 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.898 16:12:30 -- host/discovery.sh@59 -- # xargs 00:20:00.898 16:12:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:00.898 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.898 16:12:30 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:20:00.898 16:12:30 -- host/discovery.sh@84 -- # get_bdev_list 00:20:00.898 16:12:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:00.898 16:12:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:00.898 16:12:30 -- host/discovery.sh@55 -- # sort 00:20:00.898 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.898 16:12:30 -- host/discovery.sh@55 -- # xargs 00:20:00.898 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.898 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.898 16:12:30 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:20:00.898 16:12:30 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:00.898 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.898 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.898 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.898 16:12:30 -- host/discovery.sh@87 -- # get_subsystem_names 00:20:00.898 16:12:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:00.898 16:12:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:00.898 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.898 16:12:30 -- host/discovery.sh@59 -- # xargs 00:20:00.898 16:12:30 -- common/autotest_common.sh@10 
-- # set +x 00:20:00.898 16:12:30 -- host/discovery.sh@59 -- # sort 00:20:00.898 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.898 16:12:30 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:20:00.898 16:12:30 -- host/discovery.sh@88 -- # get_bdev_list 00:20:01.158 16:12:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.158 16:12:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:01.158 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.158 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:01.158 16:12:30 -- host/discovery.sh@55 -- # sort 00:20:01.158 16:12:30 -- host/discovery.sh@55 -- # xargs 00:20:01.158 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.158 16:12:30 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:20:01.158 16:12:30 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:01.158 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.158 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:01.158 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.158 16:12:30 -- host/discovery.sh@91 -- # get_subsystem_names 00:20:01.158 16:12:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:01.158 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.158 16:12:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:01.158 16:12:30 -- host/discovery.sh@59 -- # sort 00:20:01.158 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:01.158 16:12:30 -- host/discovery.sh@59 -- # xargs 00:20:01.158 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.158 16:12:30 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:20:01.158 16:12:30 -- host/discovery.sh@92 -- # get_bdev_list 00:20:01.158 16:12:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.158 16:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.158 16:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:01.158 16:12:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:01.158 16:12:30 -- host/discovery.sh@55 -- # xargs 00:20:01.158 16:12:30 -- host/discovery.sh@55 -- # sort 00:20:01.158 16:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.158 16:12:31 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:20:01.158 16:12:31 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:01.158 16:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.158 16:12:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.158 [2024-04-15 16:12:31.031414] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.158 16:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.158 16:12:31 -- host/discovery.sh@97 -- # get_subsystem_names 00:20:01.158 16:12:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:01.158 16:12:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:01.158 16:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.158 16:12:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.158 16:12:31 -- host/discovery.sh@59 -- # sort 00:20:01.158 16:12:31 -- host/discovery.sh@59 -- # xargs 00:20:01.158 16:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.158 16:12:31 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:20:01.158 16:12:31 
-- host/discovery.sh@98 -- # get_bdev_list 00:20:01.158 16:12:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.158 16:12:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:01.158 16:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.158 16:12:31 -- host/discovery.sh@55 -- # sort 00:20:01.158 16:12:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.158 16:12:31 -- host/discovery.sh@55 -- # xargs 00:20:01.158 16:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.417 16:12:31 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:20:01.417 16:12:31 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:20:01.417 16:12:31 -- host/discovery.sh@79 -- # expected_count=0 00:20:01.417 16:12:31 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:01.417 16:12:31 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:01.417 16:12:31 -- common/autotest_common.sh@901 -- # local max=10 00:20:01.417 16:12:31 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:01.417 16:12:31 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:01.417 16:12:31 -- common/autotest_common.sh@903 -- # get_notification_count 00:20:01.417 16:12:31 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:01.417 16:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.417 16:12:31 -- host/discovery.sh@74 -- # jq '. | length' 00:20:01.417 16:12:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.417 16:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.417 16:12:31 -- host/discovery.sh@74 -- # notification_count=0 00:20:01.418 16:12:31 -- host/discovery.sh@75 -- # notify_id=0 00:20:01.418 16:12:31 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:20:01.418 16:12:31 -- common/autotest_common.sh@904 -- # return 0 00:20:01.418 16:12:31 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:01.418 16:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.418 16:12:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.418 16:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.418 16:12:31 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:01.418 16:12:31 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:01.418 16:12:31 -- common/autotest_common.sh@901 -- # local max=10 00:20:01.418 16:12:31 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:01.418 16:12:31 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:01.418 16:12:31 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:20:01.418 16:12:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:01.418 16:12:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:01.418 16:12:31 -- host/discovery.sh@59 -- # sort 00:20:01.418 16:12:31 -- host/discovery.sh@59 -- # xargs 00:20:01.418 16:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.418 16:12:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.418 16:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.418 16:12:31 -- common/autotest_common.sh@903 -- 
# [[ '' == \n\v\m\e\0 ]] 00:20:01.418 16:12:31 -- common/autotest_common.sh@906 -- # sleep 1 00:20:01.984 [2024-04-15 16:12:31.690820] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:01.984 [2024-04-15 16:12:31.691042] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:01.984 [2024-04-15 16:12:31.691107] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:01.984 [2024-04-15 16:12:31.696870] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:20:01.984 [2024-04-15 16:12:31.753391] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:01.984 [2024-04-15 16:12:31.753810] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:02.550 16:12:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:02.550 16:12:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:02.550 16:12:32 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:20:02.550 16:12:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:02.550 16:12:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:02.550 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.550 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.550 16:12:32 -- host/discovery.sh@59 -- # sort 00:20:02.550 16:12:32 -- host/discovery.sh@59 -- # xargs 00:20:02.550 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.550 16:12:32 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.550 16:12:32 -- common/autotest_common.sh@904 -- # return 0 00:20:02.550 16:12:32 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:02.550 16:12:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:02.550 16:12:32 -- common/autotest_common.sh@901 -- # local max=10 00:20:02.550 16:12:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:02.550 16:12:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:20:02.550 16:12:32 -- common/autotest_common.sh@903 -- # get_bdev_list 00:20:02.550 16:12:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:02.550 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.551 16:12:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:02.551 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.551 16:12:32 -- host/discovery.sh@55 -- # xargs 00:20:02.551 16:12:32 -- host/discovery.sh@55 -- # sort 00:20:02.551 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.551 16:12:32 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:20:02.551 16:12:32 -- common/autotest_common.sh@904 -- # return 0 00:20:02.551 16:12:32 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:02.551 16:12:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:02.551 16:12:32 -- common/autotest_common.sh@901 -- # local max=10 00:20:02.551 16:12:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:02.551 16:12:32 
-- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:20:02.551 16:12:32 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:20:02.551 16:12:32 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:02.551 16:12:32 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:02.551 16:12:32 -- host/discovery.sh@63 -- # sort -n 00:20:02.551 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.551 16:12:32 -- host/discovery.sh@63 -- # xargs 00:20:02.551 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.551 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.551 16:12:32 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:20:02.551 16:12:32 -- common/autotest_common.sh@904 -- # return 0 00:20:02.551 16:12:32 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:20:02.551 16:12:32 -- host/discovery.sh@79 -- # expected_count=1 00:20:02.551 16:12:32 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:02.551 16:12:32 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:02.551 16:12:32 -- common/autotest_common.sh@901 -- # local max=10 00:20:02.551 16:12:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:02.551 16:12:32 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:02.551 16:12:32 -- common/autotest_common.sh@903 -- # get_notification_count 00:20:02.551 16:12:32 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:02.551 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.551 16:12:32 -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:02.551 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.551 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.551 16:12:32 -- host/discovery.sh@74 -- # notification_count=1 00:20:02.551 16:12:32 -- host/discovery.sh@75 -- # notify_id=1 00:20:02.551 16:12:32 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:20:02.551 16:12:32 -- common/autotest_common.sh@904 -- # return 0 00:20:02.551 16:12:32 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:02.551 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.551 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.551 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.551 16:12:32 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:02.551 16:12:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:02.551 16:12:32 -- common/autotest_common.sh@901 -- # local max=10 00:20:02.551 16:12:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:02.551 16:12:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:02.551 16:12:32 -- common/autotest_common.sh@903 -- # get_bdev_list 00:20:02.551 16:12:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:02.551 16:12:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:02.551 16:12:32 -- host/discovery.sh@55 -- # sort 00:20:02.551 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.551 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.551 16:12:32 -- host/discovery.sh@55 -- # xargs 00:20:02.809 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.809 16:12:32 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:02.809 16:12:32 -- common/autotest_common.sh@904 -- # return 0 00:20:02.809 16:12:32 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:20:02.809 16:12:32 -- host/discovery.sh@79 -- # expected_count=1 00:20:02.809 16:12:32 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:02.809 16:12:32 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:02.809 16:12:32 -- common/autotest_common.sh@901 -- # local max=10 00:20:02.809 16:12:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:02.809 16:12:32 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:02.809 16:12:32 -- common/autotest_common.sh@903 -- # get_notification_count 00:20:02.809 16:12:32 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:20:02.809 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.809 16:12:32 -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:02.809 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.809 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.809 16:12:32 -- host/discovery.sh@74 -- # notification_count=1 00:20:02.809 16:12:32 -- host/discovery.sh@75 -- # notify_id=2 00:20:02.809 16:12:32 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:20:02.809 16:12:32 -- common/autotest_common.sh@904 -- # return 0 00:20:02.809 16:12:32 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:20:02.809 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.809 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.809 [2024-04-15 16:12:32.619428] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:02.809 [2024-04-15 16:12:32.621929] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:02.809 [2024-04-15 16:12:32.622156] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:02.809 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.809 16:12:32 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:02.809 16:12:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:02.810 16:12:32 -- common/autotest_common.sh@901 -- # local max=10 00:20:02.810 16:12:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:02.810 16:12:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:02.810 16:12:32 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:20:02.810 [2024-04-15 16:12:32.627896] bdev_nvme.c:6822:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:20:02.810 16:12:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:02.810 16:12:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:02.810 16:12:32 -- host/discovery.sh@59 -- # sort 00:20:02.810 16:12:32 -- host/discovery.sh@59 -- # xargs 00:20:02.810 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.810 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.810 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.810 [2024-04-15 16:12:32.685325] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:02.810 [2024-04-15 16:12:32.685590] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:02.810 [2024-04-15 16:12:32.685697] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:02.810 16:12:32 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.810 16:12:32 -- common/autotest_common.sh@904 -- # return 0 00:20:02.810 16:12:32 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:02.810 16:12:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:02.810 16:12:32 -- common/autotest_common.sh@901 -- # local max=10 00:20:02.810 16:12:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:02.810 16:12:32 -- common/autotest_common.sh@903 -- 
# eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:02.810 16:12:32 -- common/autotest_common.sh@903 -- # get_bdev_list 00:20:02.810 16:12:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:02.810 16:12:32 -- host/discovery.sh@55 -- # sort 00:20:02.810 16:12:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:02.810 16:12:32 -- host/discovery.sh@55 -- # xargs 00:20:02.810 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.810 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.810 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.810 16:12:32 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:02.810 16:12:32 -- common/autotest_common.sh@904 -- # return 0 00:20:02.810 16:12:32 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:02.810 16:12:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:02.810 16:12:32 -- common/autotest_common.sh@901 -- # local max=10 00:20:02.810 16:12:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:02.810 16:12:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:02.810 16:12:32 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:20:02.810 16:12:32 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:02.810 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.810 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.810 16:12:32 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:02.810 16:12:32 -- host/discovery.sh@63 -- # sort -n 00:20:02.810 16:12:32 -- host/discovery.sh@63 -- # xargs 00:20:02.810 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.085 16:12:32 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:03.085 16:12:32 -- common/autotest_common.sh@904 -- # return 0 00:20:03.085 16:12:32 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:20:03.085 16:12:32 -- host/discovery.sh@79 -- # expected_count=0 00:20:03.085 16:12:32 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:03.085 16:12:32 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:03.085 16:12:32 -- common/autotest_common.sh@901 -- # local max=10 00:20:03.085 16:12:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:03.085 16:12:32 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:03.085 16:12:32 -- common/autotest_common.sh@903 -- # get_notification_count 00:20:03.085 16:12:32 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:03.085 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.085 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:03.085 16:12:32 -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:03.085 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.085 16:12:32 -- host/discovery.sh@74 -- # notification_count=0 00:20:03.085 16:12:32 -- host/discovery.sh@75 -- # notify_id=2 00:20:03.085 16:12:32 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:20:03.085 16:12:32 -- common/autotest_common.sh@904 -- # return 0 00:20:03.085 16:12:32 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:03.085 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.085 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:03.085 [2024-04-15 16:12:32.877340] bdev_nvme.c:6880:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:03.085 [2024-04-15 16:12:32.877519] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:03.085 [2024-04-15 16:12:32.880469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.085 [2024-04-15 16:12:32.880650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.085 [2024-04-15 16:12:32.880835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.085 [2024-04-15 16:12:32.880947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.085 [2024-04-15 16:12:32.881047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.085 [2024-04-15 16:12:32.881178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.085 [2024-04-15 16:12:32.881289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.085 [2024-04-15 16:12:32.881362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.085 [2024-04-15 16:12:32.881422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d8c10 is same with the state(5) to be set 00:20:03.085 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.085 16:12:32 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:03.085 16:12:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:03.085 16:12:32 -- common/autotest_common.sh@901 -- # local max=10 00:20:03.085 16:12:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:03.085 16:12:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:03.085 [2024-04-15 16:12:32.883616] bdev_nvme.c:6685:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:20:03.085 [2024-04-15 16:12:32.883645] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:03.085 [2024-04-15 16:12:32.883707] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d8c10 
(9): Bad file descriptor 00:20:03.085 16:12:32 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:20:03.085 16:12:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:03.085 16:12:32 -- host/discovery.sh@59 -- # sort 00:20:03.085 16:12:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:03.085 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.085 16:12:32 -- host/discovery.sh@59 -- # xargs 00:20:03.085 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:03.085 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.085 16:12:32 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.085 16:12:32 -- common/autotest_common.sh@904 -- # return 0 00:20:03.085 16:12:32 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:03.085 16:12:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:03.085 16:12:32 -- common/autotest_common.sh@901 -- # local max=10 00:20:03.085 16:12:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:03.085 16:12:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:03.085 16:12:32 -- common/autotest_common.sh@903 -- # get_bdev_list 00:20:03.085 16:12:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:03.085 16:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.085 16:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:03.085 16:12:32 -- host/discovery.sh@55 -- # xargs 00:20:03.085 16:12:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:03.085 16:12:32 -- host/discovery.sh@55 -- # sort 00:20:03.085 16:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.085 16:12:33 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:03.085 16:12:33 -- common/autotest_common.sh@904 -- # return 0 00:20:03.085 16:12:33 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:03.085 16:12:33 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:03.085 16:12:33 -- common/autotest_common.sh@901 -- # local max=10 00:20:03.085 16:12:33 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:03.085 16:12:33 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:20:03.085 16:12:33 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:20:03.085 16:12:33 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:03.085 16:12:33 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:03.085 16:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.085 16:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:03.085 16:12:33 -- host/discovery.sh@63 -- # sort -n 00:20:03.085 16:12:33 -- host/discovery.sh@63 -- # xargs 00:20:03.085 16:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.347 16:12:33 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:20:03.347 16:12:33 -- common/autotest_common.sh@904 -- # return 0 00:20:03.347 16:12:33 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:20:03.347 16:12:33 -- host/discovery.sh@79 -- # expected_count=0 00:20:03.347 16:12:33 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count 
&& ((notification_count == expected_count))' 00:20:03.347 16:12:33 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:03.347 16:12:33 -- common/autotest_common.sh@901 -- # local max=10 00:20:03.347 16:12:33 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:03.347 16:12:33 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:03.347 16:12:33 -- common/autotest_common.sh@903 -- # get_notification_count 00:20:03.347 16:12:33 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:03.347 16:12:33 -- host/discovery.sh@74 -- # jq '. | length' 00:20:03.347 16:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.347 16:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:03.347 16:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.347 16:12:33 -- host/discovery.sh@74 -- # notification_count=0 00:20:03.347 16:12:33 -- host/discovery.sh@75 -- # notify_id=2 00:20:03.347 16:12:33 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:20:03.347 16:12:33 -- common/autotest_common.sh@904 -- # return 0 00:20:03.347 16:12:33 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:20:03.347 16:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.347 16:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:03.347 16:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.347 16:12:33 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:20:03.347 16:12:33 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:20:03.347 16:12:33 -- common/autotest_common.sh@901 -- # local max=10 00:20:03.347 16:12:33 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:03.347 16:12:33 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:20:03.347 16:12:33 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:20:03.347 16:12:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:03.347 16:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.347 16:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:03.347 16:12:33 -- host/discovery.sh@59 -- # sort 00:20:03.347 16:12:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:03.347 16:12:33 -- host/discovery.sh@59 -- # xargs 00:20:03.347 16:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.347 16:12:33 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:20:03.347 16:12:33 -- common/autotest_common.sh@904 -- # return 0 00:20:03.347 16:12:33 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:20:03.347 16:12:33 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:20:03.347 16:12:33 -- common/autotest_common.sh@901 -- # local max=10 00:20:03.347 16:12:33 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:03.347 16:12:33 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:20:03.347 16:12:33 -- common/autotest_common.sh@903 -- # get_bdev_list 00:20:03.347 16:12:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:03.347 16:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.347 16:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:03.347 16:12:33 -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:20:03.347 16:12:33 -- host/discovery.sh@55 -- # sort 00:20:03.347 16:12:33 -- host/discovery.sh@55 -- # xargs 00:20:03.347 16:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.347 16:12:33 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:20:03.347 16:12:33 -- common/autotest_common.sh@904 -- # return 0 00:20:03.347 16:12:33 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:20:03.347 16:12:33 -- host/discovery.sh@79 -- # expected_count=2 00:20:03.347 16:12:33 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:03.347 16:12:33 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:03.348 16:12:33 -- common/autotest_common.sh@901 -- # local max=10 00:20:03.348 16:12:33 -- common/autotest_common.sh@902 -- # (( max-- )) 00:20:03.348 16:12:33 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:03.348 16:12:33 -- common/autotest_common.sh@903 -- # get_notification_count 00:20:03.348 16:12:33 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:03.348 16:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.348 16:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:03.348 16:12:33 -- host/discovery.sh@74 -- # jq '. | length' 00:20:03.348 16:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.348 16:12:33 -- host/discovery.sh@74 -- # notification_count=2 00:20:03.348 16:12:33 -- host/discovery.sh@75 -- # notify_id=4 00:20:03.348 16:12:33 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:20:03.348 16:12:33 -- common/autotest_common.sh@904 -- # return 0 00:20:03.348 16:12:33 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:03.348 16:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.348 16:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:04.720 [2024-04-15 16:12:34.281555] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:04.720 [2024-04-15 16:12:34.281606] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:04.720 [2024-04-15 16:12:34.281626] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:04.720 [2024-04-15 16:12:34.287611] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:20:04.720 [2024-04-15 16:12:34.347154] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:04.720 [2024-04-15 16:12:34.347219] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:04.720 16:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.720 16:12:34 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:04.720 16:12:34 -- common/autotest_common.sh@638 -- # local es=0 00:20:04.720 16:12:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery 
-b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:04.720 16:12:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:04.720 16:12:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:04.720 16:12:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:04.720 16:12:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:04.720 16:12:34 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:04.720 16:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.721 16:12:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.721 request: 00:20:04.721 { 00:20:04.721 "name": "nvme", 00:20:04.721 "trtype": "tcp", 00:20:04.721 "traddr": "10.0.0.2", 00:20:04.721 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:04.721 "adrfam": "ipv4", 00:20:04.721 "trsvcid": "8009", 00:20:04.721 "wait_for_attach": true, 00:20:04.721 "method": "bdev_nvme_start_discovery", 00:20:04.721 "req_id": 1 00:20:04.721 } 00:20:04.721 Got JSON-RPC error response 00:20:04.721 response: 00:20:04.721 { 00:20:04.721 "code": -17, 00:20:04.721 "message": "File exists" 00:20:04.721 } 00:20:04.721 16:12:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:04.721 16:12:34 -- common/autotest_common.sh@641 -- # es=1 00:20:04.721 16:12:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:04.721 16:12:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:04.721 16:12:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:04.721 16:12:34 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:20:04.721 16:12:34 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:04.721 16:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.721 16:12:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.721 16:12:34 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:04.721 16:12:34 -- host/discovery.sh@67 -- # sort 00:20:04.721 16:12:34 -- host/discovery.sh@67 -- # xargs 00:20:04.721 16:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.721 16:12:34 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:20:04.721 16:12:34 -- host/discovery.sh@146 -- # get_bdev_list 00:20:04.721 16:12:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:04.721 16:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.721 16:12:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.721 16:12:34 -- host/discovery.sh@55 -- # sort 00:20:04.721 16:12:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:04.721 16:12:34 -- host/discovery.sh@55 -- # xargs 00:20:04.721 16:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.721 16:12:34 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:04.721 16:12:34 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:04.721 16:12:34 -- common/autotest_common.sh@638 -- # local es=0 00:20:04.721 16:12:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:04.721 16:12:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:04.721 16:12:34 -- common/autotest_common.sh@630 -- # case "$(type 
-t "$arg")" in 00:20:04.721 16:12:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:04.721 16:12:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:04.721 16:12:34 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:04.721 16:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.721 16:12:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.721 request: 00:20:04.721 { 00:20:04.721 "name": "nvme_second", 00:20:04.721 "trtype": "tcp", 00:20:04.721 "traddr": "10.0.0.2", 00:20:04.721 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:04.721 "adrfam": "ipv4", 00:20:04.721 "trsvcid": "8009", 00:20:04.721 "wait_for_attach": true, 00:20:04.721 "method": "bdev_nvme_start_discovery", 00:20:04.721 "req_id": 1 00:20:04.721 } 00:20:04.721 Got JSON-RPC error response 00:20:04.721 response: 00:20:04.721 { 00:20:04.721 "code": -17, 00:20:04.721 "message": "File exists" 00:20:04.721 } 00:20:04.721 16:12:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:04.721 16:12:34 -- common/autotest_common.sh@641 -- # es=1 00:20:04.721 16:12:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:04.721 16:12:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:04.721 16:12:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:04.721 16:12:34 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:20:04.721 16:12:34 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:04.721 16:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.721 16:12:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.721 16:12:34 -- host/discovery.sh@67 -- # xargs 00:20:04.721 16:12:34 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:04.721 16:12:34 -- host/discovery.sh@67 -- # sort 00:20:04.721 16:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.721 16:12:34 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:20:04.721 16:12:34 -- host/discovery.sh@152 -- # get_bdev_list 00:20:04.721 16:12:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:04.721 16:12:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:04.721 16:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.721 16:12:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.721 16:12:34 -- host/discovery.sh@55 -- # sort 00:20:04.721 16:12:34 -- host/discovery.sh@55 -- # xargs 00:20:04.721 16:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.721 16:12:34 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:04.721 16:12:34 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:04.721 16:12:34 -- common/autotest_common.sh@638 -- # local es=0 00:20:04.721 16:12:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:04.721 16:12:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:04.721 16:12:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:04.721 16:12:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:04.721 16:12:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:04.721 16:12:34 
-- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:04.721 16:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.721 16:12:34 -- common/autotest_common.sh@10 -- # set +x 00:20:05.686 [2024-04-15 16:12:35.576935] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.686 [2024-04-15 16:12:35.577058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.686 [2024-04-15 16:12:35.577096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.686 [2024-04-15 16:12:35.577111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d8590 with addr=10.0.0.2, port=8010 00:20:05.686 [2024-04-15 16:12:35.577134] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:05.686 [2024-04-15 16:12:35.577144] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:05.686 [2024-04-15 16:12:35.577155] bdev_nvme.c:6960:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:20:06.619 [2024-04-15 16:12:36.576937] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.619 [2024-04-15 16:12:36.577042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.619 [2024-04-15 16:12:36.577079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.619 [2024-04-15 16:12:36.577094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170a9c0 with addr=10.0.0.2, port=8010 00:20:06.619 [2024-04-15 16:12:36.577120] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:06.619 [2024-04-15 16:12:36.577131] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:06.620 [2024-04-15 16:12:36.577142] bdev_nvme.c:6960:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:20:08.024 [2024-04-15 16:12:37.576799] bdev_nvme.c:6941:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:20:08.024 request: 00:20:08.024 { 00:20:08.024 "name": "nvme_second", 00:20:08.024 "trtype": "tcp", 00:20:08.024 "traddr": "10.0.0.2", 00:20:08.024 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:08.024 "adrfam": "ipv4", 00:20:08.024 "trsvcid": "8010", 00:20:08.024 "attach_timeout_ms": 3000, 00:20:08.024 "method": "bdev_nvme_start_discovery", 00:20:08.024 "req_id": 1 00:20:08.024 } 00:20:08.024 Got JSON-RPC error response 00:20:08.024 response: 00:20:08.024 { 00:20:08.024 "code": -110, 00:20:08.024 "message": "Connection timed out" 00:20:08.024 } 00:20:08.024 16:12:37 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:08.024 16:12:37 -- common/autotest_common.sh@641 -- # es=1 00:20:08.024 16:12:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:08.024 16:12:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:08.024 16:12:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:08.024 16:12:37 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:20:08.024 16:12:37 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:08.024 16:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.024 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:20:08.024 16:12:37 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:08.024 16:12:37 
-- host/discovery.sh@67 -- # sort 00:20:08.024 16:12:37 -- host/discovery.sh@67 -- # xargs 00:20:08.024 16:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.024 16:12:37 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:20:08.024 16:12:37 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:20:08.024 16:12:37 -- host/discovery.sh@161 -- # kill 88239 00:20:08.024 16:12:37 -- host/discovery.sh@162 -- # nvmftestfini 00:20:08.024 16:12:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:08.024 16:12:37 -- nvmf/common.sh@117 -- # sync 00:20:08.024 16:12:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:08.024 16:12:37 -- nvmf/common.sh@120 -- # set +e 00:20:08.024 16:12:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:08.024 16:12:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:08.024 rmmod nvme_tcp 00:20:08.024 rmmod nvme_fabrics 00:20:08.024 16:12:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:08.024 16:12:37 -- nvmf/common.sh@124 -- # set -e 00:20:08.024 16:12:37 -- nvmf/common.sh@125 -- # return 0 00:20:08.024 16:12:37 -- nvmf/common.sh@478 -- # '[' -n 88219 ']' 00:20:08.024 16:12:37 -- nvmf/common.sh@479 -- # killprocess 88219 00:20:08.024 16:12:37 -- common/autotest_common.sh@936 -- # '[' -z 88219 ']' 00:20:08.024 16:12:37 -- common/autotest_common.sh@940 -- # kill -0 88219 00:20:08.024 16:12:37 -- common/autotest_common.sh@941 -- # uname 00:20:08.024 16:12:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:08.024 16:12:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88219 00:20:08.024 16:12:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:08.024 16:12:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:08.024 16:12:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88219' 00:20:08.024 killing process with pid 88219 00:20:08.024 16:12:37 -- common/autotest_common.sh@955 -- # kill 88219 00:20:08.024 16:12:37 -- common/autotest_common.sh@960 -- # wait 88219 00:20:08.024 16:12:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:08.024 16:12:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:08.024 16:12:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:08.024 16:12:37 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.024 16:12:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:08.024 16:12:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.024 16:12:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.024 16:12:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.282 16:12:38 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:08.282 00:20:08.282 real 0m8.730s 00:20:08.282 user 0m16.439s 00:20:08.282 sys 0m2.286s 00:20:08.282 ************************************ 00:20:08.282 END TEST nvmf_discovery 00:20:08.282 ************************************ 00:20:08.282 16:12:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:08.282 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:20:08.282 16:12:38 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:08.282 16:12:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:08.282 16:12:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:08.282 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:20:08.282 
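The checks traced above are built from a small set of shell helpers: accessors that pipe rpc_cmd output through jq, and a polling wrapper that re-evaluates a condition a bounded number of times. The sketch below is only a minimal reconstruction of that pattern; the helper and accessor names and the max=10 retry bound come from the trace, while the sleep interval and other autotest_common.sh details are assumptions, and rpc_cmd stands in for the repository's wrapper around scripts/rpc.py.

  #!/usr/bin/env bash
  # Minimal sketch of the polling pattern used by host/discovery.sh above
  # (sleep interval and error handling are assumptions, not the exact helper).
  waitforcondition() {
      local cond=$1
      local max=10                     # the traced helper also retries up to 10 times
      while (( max-- )); do
          eval "$cond" && return 0     # the condition is passed in as a shell expression string
          sleep 1                      # assumed delay between retries
      done
      return 1
  }

  get_subsystem_names() {
      # controllers attached on the host-side SPDK app, queried exactly as in the trace
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  # e.g. wait until the discovery service has attached controller "nvme0"
  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'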
************************************ 00:20:08.282 START TEST nvmf_discovery_remove_ifc 00:20:08.282 ************************************ 00:20:08.282 16:12:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:08.282 * Looking for test storage... 00:20:08.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:08.282 16:12:38 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:08.282 16:12:38 -- nvmf/common.sh@7 -- # uname -s 00:20:08.282 16:12:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.282 16:12:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.282 16:12:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.282 16:12:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.282 16:12:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.282 16:12:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.282 16:12:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.282 16:12:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.282 16:12:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.282 16:12:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.541 16:12:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:20:08.541 16:12:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:20:08.541 16:12:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.541 16:12:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.541 16:12:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:08.541 16:12:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:08.541 16:12:38 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:08.541 16:12:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.541 16:12:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.541 16:12:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.541 16:12:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.541 16:12:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.541 16:12:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.541 16:12:38 -- paths/export.sh@5 -- # export PATH 00:20:08.541 16:12:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.541 16:12:38 -- nvmf/common.sh@47 -- # : 0 00:20:08.541 16:12:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:08.541 16:12:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:08.541 16:12:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:08.541 16:12:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.541 16:12:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.541 16:12:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:08.541 16:12:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:08.541 16:12:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:08.541 16:12:38 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:08.541 16:12:38 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:20:08.541 16:12:38 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:08.541 16:12:38 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:08.541 16:12:38 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:08.541 16:12:38 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:08.541 16:12:38 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:08.541 16:12:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:08.541 16:12:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.541 16:12:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:08.541 16:12:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:08.541 16:12:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:08.541 16:12:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.541 16:12:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.541 16:12:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.541 16:12:38 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:08.541 16:12:38 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:08.541 16:12:38 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:08.541 16:12:38 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:08.541 16:12:38 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:08.541 16:12:38 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:08.541 16:12:38 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.541 16:12:38 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.541 16:12:38 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:08.541 16:12:38 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:08.541 16:12:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:08.541 16:12:38 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:08.541 16:12:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:08.541 16:12:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.541 16:12:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:08.541 16:12:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:08.541 16:12:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:08.541 16:12:38 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:08.541 16:12:38 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:08.541 16:12:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:08.541 Cannot find device "nvmf_tgt_br" 00:20:08.541 16:12:38 -- nvmf/common.sh@155 -- # true 00:20:08.541 16:12:38 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:08.541 Cannot find device "nvmf_tgt_br2" 00:20:08.541 16:12:38 -- nvmf/common.sh@156 -- # true 00:20:08.541 16:12:38 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:08.541 16:12:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:08.541 Cannot find device "nvmf_tgt_br" 00:20:08.541 16:12:38 -- nvmf/common.sh@158 -- # true 00:20:08.541 16:12:38 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:08.541 Cannot find device "nvmf_tgt_br2" 00:20:08.541 16:12:38 -- nvmf/common.sh@159 -- # true 00:20:08.541 16:12:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:08.541 16:12:38 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:08.541 16:12:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:08.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.541 16:12:38 -- nvmf/common.sh@162 -- # true 00:20:08.541 16:12:38 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:08.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:08.541 16:12:38 -- nvmf/common.sh@163 -- # true 00:20:08.541 16:12:38 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:08.541 16:12:38 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:08.541 16:12:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:08.541 16:12:38 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:08.541 16:12:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:08.541 16:12:38 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:08.541 16:12:38 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:08.541 16:12:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:08.541 16:12:38 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:08.541 16:12:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:08.541 16:12:38 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:08.541 16:12:38 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:08.541 16:12:38 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:08.541 16:12:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:08.800 16:12:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:08.800 16:12:38 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:08.800 16:12:38 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:08.800 16:12:38 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:08.800 16:12:38 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:08.800 16:12:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:08.800 16:12:38 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:08.800 16:12:38 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:08.800 16:12:38 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:08.800 16:12:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:08.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:20:08.800 00:20:08.800 --- 10.0.0.2 ping statistics --- 00:20:08.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.800 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:20:08.800 16:12:38 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:08.800 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:08.800 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:20:08.800 00:20:08.800 --- 10.0.0.3 ping statistics --- 00:20:08.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.800 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:08.800 16:12:38 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:08.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:08.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:20:08.800 00:20:08.800 --- 10.0.0.1 ping statistics --- 00:20:08.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.800 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:08.800 16:12:38 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.800 16:12:38 -- nvmf/common.sh@422 -- # return 0 00:20:08.800 16:12:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:08.800 16:12:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.800 16:12:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:08.800 16:12:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:08.800 16:12:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.800 16:12:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:08.800 16:12:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:08.800 16:12:38 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:08.800 16:12:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:08.800 16:12:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:08.800 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:20:08.800 16:12:38 -- nvmf/common.sh@470 -- # nvmfpid=88686 00:20:08.800 16:12:38 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:08.800 16:12:38 -- nvmf/common.sh@471 -- # waitforlisten 88686 00:20:08.800 16:12:38 -- common/autotest_common.sh@817 -- # '[' -z 88686 ']' 00:20:08.800 16:12:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.800 16:12:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:08.800 16:12:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.800 16:12:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:08.800 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:20:08.800 [2024-04-15 16:12:38.710593] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:20:08.800 [2024-04-15 16:12:38.710680] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.057 [2024-04-15 16:12:38.850665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.057 [2024-04-15 16:12:38.902274] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.057 [2024-04-15 16:12:38.902550] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.057 [2024-04-15 16:12:38.902684] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.057 [2024-04-15 16:12:38.902737] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.057 [2024-04-15 16:12:38.902828] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
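The target side of this test runs inside a network namespace that nvmf_veth_init builds from veth pairs and a bridge before the ping checks above. The commands below are condensed from that traced sequence; the second target interface (10.0.0.3), the teardown, and the retry logic are omitted, so this is a sketch of the topology rather than the full helper.

  #!/usr/bin/env bash
  # Condensed from the nvmf_veth_init commands traced above; run as root.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root ns
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end moves into the ns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2        # initiator -> target-namespace sanity check, as in the trace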
00:20:09.057 [2024-04-15 16:12:38.902891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.057 16:12:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:09.057 16:12:39 -- common/autotest_common.sh@850 -- # return 0 00:20:09.057 16:12:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:09.057 16:12:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:09.057 16:12:39 -- common/autotest_common.sh@10 -- # set +x 00:20:09.315 16:12:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.315 16:12:39 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:09.315 16:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.315 16:12:39 -- common/autotest_common.sh@10 -- # set +x 00:20:09.315 [2024-04-15 16:12:39.058005] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.315 [2024-04-15 16:12:39.066161] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:09.315 null0 00:20:09.315 [2024-04-15 16:12:39.098127] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.315 16:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.315 16:12:39 -- host/discovery_remove_ifc.sh@59 -- # hostpid=88715 00:20:09.315 16:12:39 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 88715 /tmp/host.sock 00:20:09.315 16:12:39 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:09.315 16:12:39 -- common/autotest_common.sh@817 -- # '[' -z 88715 ']' 00:20:09.315 16:12:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:20:09.315 16:12:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:09.315 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:09.315 16:12:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:09.315 16:12:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:09.315 16:12:39 -- common/autotest_common.sh@10 -- # set +x 00:20:09.315 [2024-04-15 16:12:39.172690] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:20:09.315 [2024-04-15 16:12:39.172815] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88715 ] 00:20:09.573 [2024-04-15 16:12:39.320894] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.573 [2024-04-15 16:12:39.384182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.509 16:12:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:10.509 16:12:40 -- common/autotest_common.sh@850 -- # return 0 00:20:10.509 16:12:40 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:10.509 16:12:40 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:10.509 16:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.509 16:12:40 -- common/autotest_common.sh@10 -- # set +x 00:20:10.509 16:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.509 16:12:40 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:10.509 16:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.509 16:12:40 -- common/autotest_common.sh@10 -- # set +x 00:20:10.509 16:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.509 16:12:40 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:10.509 16:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.509 16:12:40 -- common/autotest_common.sh@10 -- # set +x 00:20:11.443 [2024-04-15 16:12:41.262224] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:11.443 [2024-04-15 16:12:41.262259] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:11.443 [2024-04-15 16:12:41.262274] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:11.443 [2024-04-15 16:12:41.268260] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:20:11.443 [2024-04-15 16:12:41.324533] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:11.443 [2024-04-15 16:12:41.324622] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:11.443 [2024-04-15 16:12:41.324645] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:11.443 [2024-04-15 16:12:41.324664] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:11.443 [2024-04-15 16:12:41.324691] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:11.443 16:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:20:11.443 16:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.443 16:12:41 -- common/autotest_common.sh@10 -- # set +x 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:11.443 [2024-04-15 16:12:41.331560] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x20a7490 was disconnected and freed. delete nvme_qpair. 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:11.443 16:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:11.443 16:12:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.443 16:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.443 16:12:41 -- common/autotest_common.sh@10 -- # set +x 00:20:11.701 16:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.701 16:12:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:11.701 16:12:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:12.635 16:12:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:12.635 16:12:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:12.635 16:12:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.635 16:12:42 -- common/autotest_common.sh@10 -- # set +x 00:20:12.635 16:12:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:12.635 16:12:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:12.635 16:12:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:12.635 16:12:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.635 16:12:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:12.635 16:12:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:13.568 16:12:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:13.568 16:12:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:13.568 16:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.568 16:12:43 -- common/autotest_common.sh@10 -- # set +x 00:20:13.568 16:12:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:13.568 16:12:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:13.568 16:12:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:13.827 16:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.827 16:12:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:13.827 16:12:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:14.763 16:12:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:14.763 16:12:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:14.763 16:12:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:14.763 16:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 
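What follows in the trace is the core of discovery_remove_ifc.sh: the host polls the bdev list until the discovered namespace shows up, the target interface is then deleted and downed inside the namespace, and the host polls again until the bdev disappears. A rough sketch of that flow is below; names and commands are taken from the trace, while the timeout handling of the real wait_for_bdev helper is omitted.

  #!/usr/bin/env bash
  # Sketch of the remove-interface flow traced above (timeout handling omitted).
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1                     # the trace polls bdev_get_bdevs roughly once per second
      done
  }

  wait_for_bdev nvme0n1                                             # namespace attached via discovery
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  wait_for_bdev ''                                                  # path lost, bdev should go away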
00:20:14.763 16:12:44 -- common/autotest_common.sh@10 -- # set +x 00:20:14.763 16:12:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:14.763 16:12:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:14.763 16:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.763 16:12:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:14.763 16:12:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:15.697 16:12:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:15.697 16:12:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:15.697 16:12:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:15.697 16:12:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:15.697 16:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.697 16:12:45 -- common/autotest_common.sh@10 -- # set +x 00:20:15.697 16:12:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:15.697 16:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.955 16:12:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:15.955 16:12:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:16.889 16:12:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:16.889 16:12:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:16.889 16:12:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:16.889 16:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.889 16:12:46 -- common/autotest_common.sh@10 -- # set +x 00:20:16.889 16:12:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:16.889 16:12:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:16.889 16:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.889 16:12:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:16.889 16:12:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:16.889 [2024-04-15 16:12:46.752461] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:16.889 [2024-04-15 16:12:46.752527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.889 [2024-04-15 16:12:46.752543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.889 [2024-04-15 16:12:46.752557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.889 [2024-04-15 16:12:46.752568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.889 [2024-04-15 16:12:46.752586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.889 [2024-04-15 16:12:46.752596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.889 [2024-04-15 16:12:46.752608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.889 [2024-04-15 16:12:46.752619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.889 
[2024-04-15 16:12:46.752630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:16.889 [2024-04-15 16:12:46.752642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:16.889 [2024-04-15 16:12:46.752653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2081f40 is same with the state(5) to be set 00:20:16.889 [2024-04-15 16:12:46.762456] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2081f40 (9): Bad file descriptor 00:20:16.889 [2024-04-15 16:12:46.772492] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:17.823 16:12:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:17.823 16:12:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:17.823 16:12:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:17.823 16:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.823 16:12:47 -- common/autotest_common.sh@10 -- # set +x 00:20:17.823 16:12:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:17.823 16:12:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:18.081 [2024-04-15 16:12:47.801636] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:19.015 [2024-04-15 16:12:48.825677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:20:19.950 [2024-04-15 16:12:49.849676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:20:19.950 [2024-04-15 16:12:49.849835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2081f40 with addr=10.0.0.2, port=4420 00:20:19.950 [2024-04-15 16:12:49.849882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2081f40 is same with the state(5) to be set 00:20:19.950 [2024-04-15 16:12:49.850882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2081f40 (9): Bad file descriptor 00:20:19.950 [2024-04-15 16:12:49.850981] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.950 [2024-04-15 16:12:49.851040] bdev_nvme.c:6649:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:20:19.950 [2024-04-15 16:12:49.851116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.950 [2024-04-15 16:12:49.851152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-04-15 16:12:49.851188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.950 [2024-04-15 16:12:49.851217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-04-15 16:12:49.851247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.950 [2024-04-15 16:12:49.851277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-04-15 16:12:49.851307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.950 [2024-04-15 16:12:49.851337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-04-15 16:12:49.851368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.950 [2024-04-15 16:12:49.851396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-04-15 16:12:49.851425] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
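What drives the failures above is the test deliberately cutting the target's data path inside its network namespace and later restoring it, so bdev_nvme's reconnect and rediscovery handling can be observed from the host side. A sketch of that sequence, using the namespace, interface and address actually shown in this trace:

  # Outage: drop the target address and take its veth end down. Host qpairs
  # start failing (errno 110, then bad file descriptor) and the bdev is removed.
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

  # ... poll until the bdev list is empty (see the wait sketch above) ...

  # Restore: re-add the address and bring the link up. The discovery service
  # re-attaches the subsystem and a fresh bdev (nvme1n1 in this run) appears.
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up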
00:20:19.951 [2024-04-15 16:12:49.851463] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2055520 (9): Bad file descriptor 00:20:19.951 [2024-04-15 16:12:49.852018] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:19.951 [2024-04-15 16:12:49.852068] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:20:19.951 16:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.951 16:12:49 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:19.951 16:12:49 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:21.343 16:12:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:21.343 16:12:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:21.343 16:12:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:21.343 16:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.343 16:12:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:21.343 16:12:50 -- common/autotest_common.sh@10 -- # set +x 00:20:21.343 16:12:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:21.343 16:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.343 16:12:50 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:21.343 16:12:50 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:21.343 16:12:50 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:21.343 16:12:50 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:21.343 16:12:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:21.343 16:12:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:21.343 16:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.343 16:12:50 -- common/autotest_common.sh@10 -- # set +x 00:20:21.343 16:12:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:21.343 16:12:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:21.343 16:12:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:21.343 16:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.343 16:12:51 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:21.343 16:12:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:21.909 [2024-04-15 16:12:51.857875] bdev_nvme.c:6898:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:21.909 [2024-04-15 16:12:51.857917] bdev_nvme.c:6978:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:21.909 [2024-04-15 16:12:51.857935] bdev_nvme.c:6861:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:21.909 [2024-04-15 16:12:51.863907] bdev_nvme.c:6827:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:20:22.168 [2024-04-15 16:12:51.919064] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:22.168 [2024-04-15 16:12:51.919126] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:22.168 [2024-04-15 16:12:51.919145] bdev_nvme.c:7688:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:22.168 [2024-04-15 16:12:51.919162] bdev_nvme.c:6717:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:20:22.168 [2024-04-15 16:12:51.919172] bdev_nvme.c:6676:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:22.168 [2024-04-15 16:12:51.926632] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x205cbb0 was disconnected and freed. delete nvme_qpair. 00:20:22.168 16:12:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:22.168 16:12:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:22.168 16:12:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:22.168 16:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.168 16:12:52 -- common/autotest_common.sh@10 -- # set +x 00:20:22.168 16:12:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:22.168 16:12:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:22.168 16:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.168 16:12:52 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:22.168 16:12:52 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:22.168 16:12:52 -- host/discovery_remove_ifc.sh@90 -- # killprocess 88715 00:20:22.168 16:12:52 -- common/autotest_common.sh@936 -- # '[' -z 88715 ']' 00:20:22.168 16:12:52 -- common/autotest_common.sh@940 -- # kill -0 88715 00:20:22.168 16:12:52 -- common/autotest_common.sh@941 -- # uname 00:20:22.168 16:12:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:22.168 16:12:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88715 00:20:22.168 killing process with pid 88715 00:20:22.168 16:12:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:22.168 16:12:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:22.168 16:12:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88715' 00:20:22.168 16:12:52 -- common/autotest_common.sh@955 -- # kill 88715 00:20:22.168 16:12:52 -- common/autotest_common.sh@960 -- # wait 88715 00:20:22.432 16:12:52 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:22.433 16:12:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:22.433 16:12:52 -- nvmf/common.sh@117 -- # sync 00:20:22.433 16:12:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:22.433 16:12:52 -- nvmf/common.sh@120 -- # set +e 00:20:22.433 16:12:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:22.433 16:12:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:22.433 rmmod nvme_tcp 00:20:22.433 rmmod nvme_fabrics 00:20:22.433 16:12:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:22.433 16:12:52 -- nvmf/common.sh@124 -- # set -e 00:20:22.433 16:12:52 -- nvmf/common.sh@125 -- # return 0 00:20:22.433 16:12:52 -- nvmf/common.sh@478 -- # '[' -n 88686 ']' 00:20:22.433 16:12:52 -- nvmf/common.sh@479 -- # killprocess 88686 00:20:22.433 16:12:52 -- common/autotest_common.sh@936 -- # '[' -z 88686 ']' 00:20:22.433 16:12:52 -- common/autotest_common.sh@940 -- # kill -0 88686 00:20:22.433 16:12:52 -- common/autotest_common.sh@941 -- # uname 00:20:22.433 16:12:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:22.433 16:12:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88686 00:20:22.433 16:12:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:22.433 16:12:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:22.433 16:12:52 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 88686' 00:20:22.433 killing process with pid 88686 00:20:22.433 16:12:52 -- common/autotest_common.sh@955 -- # kill 88686 00:20:22.433 16:12:52 -- common/autotest_common.sh@960 -- # wait 88686 00:20:22.690 16:12:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:22.690 16:12:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:22.690 16:12:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:22.690 16:12:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.690 16:12:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.690 16:12:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.690 16:12:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.690 16:12:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.690 16:12:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:22.690 00:20:22.690 real 0m14.484s 00:20:22.690 user 0m22.759s 00:20:22.690 sys 0m3.175s 00:20:22.690 16:12:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:22.690 16:12:52 -- common/autotest_common.sh@10 -- # set +x 00:20:22.690 ************************************ 00:20:22.690 END TEST nvmf_discovery_remove_ifc 00:20:22.690 ************************************ 00:20:22.950 16:12:52 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:22.950 16:12:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:22.950 16:12:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:22.950 16:12:52 -- common/autotest_common.sh@10 -- # set +x 00:20:22.950 ************************************ 00:20:22.950 START TEST nvmf_identify_kernel_target 00:20:22.950 ************************************ 00:20:22.950 16:12:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:22.950 * Looking for test storage... 
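Before the identify_kernel_target output gets going, a recap of the teardown that just closed nvmf_discovery_remove_ifc: kill the host app and the nvmf target by pid, then unload the initiator modules and flush the test interface. A hedged sketch of that pattern; the pids are the ones from this run, and the netns removal is only a rough stand-in for _remove_spdk_ns, whose body is traced out of the log:

  # Sketch of the per-test teardown seen above.
  killprocess() {
      local pid=$1
      kill -0 "$pid"            # still alive?
      kill "$pid"               # SPDK reactors exit cleanly on SIGTERM
      wait "$pid" || true       # works here because the suite launched the app from this shell
  }

  killprocess 88715             # app behind /tmp/host.sock
  killprocess 88686             # nvmf target

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # assumption: approximates _remove_spdk_ns
  ip -4 addr flush nvmf_init_if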
00:20:22.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:22.950 16:12:52 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:22.950 16:12:52 -- nvmf/common.sh@7 -- # uname -s 00:20:22.950 16:12:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.950 16:12:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.950 16:12:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.950 16:12:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.950 16:12:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.950 16:12:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.950 16:12:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.950 16:12:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.950 16:12:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.950 16:12:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.950 16:12:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:20:22.950 16:12:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:20:22.950 16:12:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.950 16:12:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.950 16:12:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:22.950 16:12:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.950 16:12:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:22.950 16:12:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.950 16:12:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.950 16:12:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.950 16:12:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.950 16:12:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.950 16:12:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.950 16:12:52 -- paths/export.sh@5 -- # export PATH 00:20:22.950 16:12:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.950 16:12:52 -- nvmf/common.sh@47 -- # : 0 00:20:22.950 16:12:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:22.950 16:12:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:22.950 16:12:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.950 16:12:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.950 16:12:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.950 16:12:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:22.950 16:12:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:22.950 16:12:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:22.950 16:12:52 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:22.950 16:12:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:22.950 16:12:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.950 16:12:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:22.950 16:12:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:22.950 16:12:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:22.950 16:12:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.950 16:12:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.950 16:12:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.950 16:12:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:22.950 16:12:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:22.950 16:12:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:22.950 16:12:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:22.950 16:12:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:22.950 16:12:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:22.950 16:12:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.950 16:12:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.950 16:12:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:22.950 16:12:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:22.950 16:12:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:22.950 16:12:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:22.950 16:12:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:22.950 16:12:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:20:22.950 16:12:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:22.950 16:12:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:22.950 16:12:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:22.950 16:12:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:22.950 16:12:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:22.950 16:12:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:23.208 Cannot find device "nvmf_tgt_br" 00:20:23.208 16:12:52 -- nvmf/common.sh@155 -- # true 00:20:23.208 16:12:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.208 Cannot find device "nvmf_tgt_br2" 00:20:23.208 16:12:52 -- nvmf/common.sh@156 -- # true 00:20:23.208 16:12:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:23.208 16:12:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:23.208 Cannot find device "nvmf_tgt_br" 00:20:23.208 16:12:52 -- nvmf/common.sh@158 -- # true 00:20:23.208 16:12:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:23.208 Cannot find device "nvmf_tgt_br2" 00:20:23.208 16:12:52 -- nvmf/common.sh@159 -- # true 00:20:23.208 16:12:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:23.208 16:12:53 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:23.208 16:12:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:23.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.208 16:12:53 -- nvmf/common.sh@162 -- # true 00:20:23.208 16:12:53 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:23.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.208 16:12:53 -- nvmf/common.sh@163 -- # true 00:20:23.208 16:12:53 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:23.208 16:12:53 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:23.208 16:12:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:23.208 16:12:53 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:23.208 16:12:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:23.208 16:12:53 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:23.208 16:12:53 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:23.208 16:12:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:23.208 16:12:53 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:23.208 16:12:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:23.208 16:12:53 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:23.208 16:12:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:23.208 16:12:53 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:23.208 16:12:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:23.208 16:12:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:23.208 16:12:53 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:23.208 16:12:53 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:23.466 16:12:53 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:23.466 16:12:53 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:23.466 16:12:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:23.466 16:12:53 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:23.466 16:12:53 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:23.466 16:12:53 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:23.466 16:12:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:23.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:20:23.466 00:20:23.466 --- 10.0.0.2 ping statistics --- 00:20:23.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.466 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:20:23.466 16:12:53 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:23.466 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:23.466 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:20:23.466 00:20:23.466 --- 10.0.0.3 ping statistics --- 00:20:23.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.466 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:23.466 16:12:53 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:23.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:23.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:23.466 00:20:23.466 --- 10.0.0.1 ping statistics --- 00:20:23.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.466 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:23.466 16:12:53 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.466 16:12:53 -- nvmf/common.sh@422 -- # return 0 00:20:23.466 16:12:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:23.466 16:12:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.466 16:12:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:23.466 16:12:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:23.466 16:12:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.466 16:12:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:23.466 16:12:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:23.466 16:12:53 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:23.466 16:12:53 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:23.466 16:12:53 -- nvmf/common.sh@717 -- # local ip 00:20:23.466 16:12:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:23.466 16:12:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:23.466 16:12:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.466 16:12:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.466 16:12:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:23.466 16:12:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.466 16:12:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:23.466 16:12:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:23.466 16:12:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:23.466 16:12:53 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:23.466 16:12:53 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:23.466 16:12:53 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:23.466 16:12:53 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:20:23.466 16:12:53 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:23.466 16:12:53 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:23.466 16:12:53 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:23.466 16:12:53 -- nvmf/common.sh@628 -- # local block nvme 00:20:23.466 16:12:53 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:20:23.466 16:12:53 -- nvmf/common.sh@631 -- # modprobe nvmet 00:20:23.466 16:12:53 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:23.466 16:12:53 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:23.725 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:23.982 Waiting for block devices as requested 00:20:23.982 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:23.982 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:24.240 16:12:53 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:24.240 16:12:53 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:24.240 16:12:53 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:20:24.240 16:12:53 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:24.240 16:12:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:24.240 16:12:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:24.240 16:12:53 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:20:24.240 16:12:53 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:24.240 16:12:53 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:24.240 No valid GPT data, bailing 00:20:24.240 16:12:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:24.240 16:12:54 -- scripts/common.sh@391 -- # pt= 00:20:24.240 16:12:54 -- scripts/common.sh@392 -- # return 1 00:20:24.240 16:12:54 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:20:24.240 16:12:54 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:24.240 16:12:54 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:24.240 16:12:54 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:20:24.240 16:12:54 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:24.240 16:12:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:24.240 16:12:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:24.240 16:12:54 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:20:24.240 16:12:54 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:24.240 16:12:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:24.240 No valid GPT data, bailing 00:20:24.240 16:12:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:24.240 16:12:54 -- scripts/common.sh@391 -- # pt= 00:20:24.240 16:12:54 -- scripts/common.sh@392 -- # return 1 00:20:24.240 16:12:54 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:20:24.240 16:12:54 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:24.240 16:12:54 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:24.240 16:12:54 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:20:24.240 16:12:54 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:24.240 16:12:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:24.240 16:12:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:24.240 16:12:54 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:20:24.240 16:12:54 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:24.240 16:12:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:24.499 No valid GPT data, bailing 00:20:24.499 16:12:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:24.499 16:12:54 -- scripts/common.sh@391 -- # pt= 00:20:24.499 16:12:54 -- scripts/common.sh@392 -- # return 1 00:20:24.499 16:12:54 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:20:24.499 16:12:54 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:24.499 16:12:54 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:24.499 16:12:54 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:20:24.499 16:12:54 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:24.499 16:12:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:24.499 16:12:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:24.499 16:12:54 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:20:24.499 16:12:54 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:24.499 16:12:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:24.499 No valid GPT data, bailing 00:20:24.499 16:12:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:24.499 16:12:54 -- scripts/common.sh@391 -- # pt= 00:20:24.499 16:12:54 -- scripts/common.sh@392 -- # return 1 00:20:24.499 16:12:54 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:20:24.499 16:12:54 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:20:24.499 16:12:54 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:24.499 16:12:54 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:24.499 16:12:54 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:24.499 16:12:54 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:24.499 16:12:54 -- nvmf/common.sh@656 -- # echo 1 00:20:24.499 16:12:54 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:20:24.499 16:12:54 -- nvmf/common.sh@658 -- # echo 1 00:20:24.499 16:12:54 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:20:24.499 16:12:54 -- nvmf/common.sh@661 -- # echo tcp 00:20:24.499 16:12:54 -- nvmf/common.sh@662 -- # echo 4420 00:20:24.499 16:12:54 -- nvmf/common.sh@663 -- # echo ipv4 00:20:24.499 16:12:54 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:24.499 16:12:54 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -a 10.0.0.1 -t tcp -s 4420 00:20:24.499 00:20:24.499 Discovery Log Number of Records 2, Generation counter 2 00:20:24.499 =====Discovery Log Entry 0====== 00:20:24.499 trtype: tcp 00:20:24.499 adrfam: ipv4 00:20:24.499 subtype: current discovery subsystem 00:20:24.499 treq: not specified, sq flow control disable supported 00:20:24.499 portid: 1 00:20:24.499 trsvcid: 4420 00:20:24.499 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:24.499 traddr: 10.0.0.1 00:20:24.499 eflags: none 00:20:24.499 sectype: none 00:20:24.499 =====Discovery Log Entry 1====== 00:20:24.499 trtype: tcp 00:20:24.499 adrfam: ipv4 00:20:24.499 subtype: nvme subsystem 00:20:24.499 treq: not specified, sq flow control disable supported 00:20:24.499 portid: 1 00:20:24.499 trsvcid: 4420 00:20:24.499 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:24.499 traddr: 10.0.0.1 00:20:24.499 eflags: none 00:20:24.499 sectype: none 00:20:24.499 16:12:54 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:24.499 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:24.759 ===================================================== 00:20:24.759 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:24.759 ===================================================== 00:20:24.759 Controller Capabilities/Features 00:20:24.759 ================================ 00:20:24.759 Vendor ID: 0000 00:20:24.759 Subsystem Vendor ID: 0000 00:20:24.759 Serial Number: 5759879a617810dd7dd7 00:20:24.759 Model Number: Linux 00:20:24.759 Firmware Version: 6.5.12-2 00:20:24.759 Recommended Arb Burst: 0 00:20:24.759 IEEE OUI Identifier: 00 00 00 00:20:24.759 Multi-path I/O 00:20:24.759 May have multiple subsystem ports: No 00:20:24.759 May have multiple controllers: No 00:20:24.759 Associated with SR-IOV VF: No 00:20:24.759 Max Data Transfer Size: Unlimited 00:20:24.759 Max Number of Namespaces: 0 00:20:24.759 Max Number of I/O Queues: 1024 00:20:24.759 NVMe Specification Version (VS): 1.3 00:20:24.759 NVMe Specification Version (Identify): 1.3 00:20:24.759 Maximum Queue Entries: 1024 00:20:24.759 Contiguous Queues Required: No 00:20:24.759 Arbitration Mechanisms Supported 00:20:24.759 Weighted Round Robin: Not Supported 00:20:24.759 Vendor Specific: Not Supported 00:20:24.759 Reset Timeout: 7500 ms 00:20:24.759 Doorbell Stride: 4 bytes 00:20:24.759 NVM Subsystem Reset: Not Supported 00:20:24.759 Command Sets Supported 00:20:24.759 NVM Command Set: Supported 00:20:24.759 Boot Partition: Not Supported 00:20:24.759 Memory Page Size Minimum: 4096 bytes 00:20:24.759 Memory Page Size Maximum: 4096 bytes 00:20:24.759 Persistent Memory Region: Not Supported 00:20:24.759 Optional Asynchronous Events Supported 00:20:24.759 Namespace Attribute Notices: Not Supported 00:20:24.759 Firmware Activation Notices: Not Supported 00:20:24.759 ANA Change Notices: Not Supported 00:20:24.759 PLE Aggregate Log Change Notices: Not Supported 00:20:24.759 LBA Status Info Alert Notices: Not Supported 00:20:24.759 EGE Aggregate Log Change Notices: Not Supported 00:20:24.759 Normal NVM Subsystem Shutdown event: Not Supported 00:20:24.759 Zone Descriptor Change Notices: Not Supported 00:20:24.759 Discovery Log Change Notices: Supported 00:20:24.759 Controller Attributes 00:20:24.759 128-bit Host Identifier: Not Supported 00:20:24.759 Non-Operational Permissive Mode: Not Supported 00:20:24.759 NVM Sets: Not Supported 00:20:24.759 Read Recovery Levels: Not Supported 00:20:24.759 Endurance Groups: Not Supported 00:20:24.759 Predictable Latency Mode: Not Supported 00:20:24.759 Traffic Based Keep ALive: Not Supported 00:20:24.759 Namespace Granularity: Not Supported 00:20:24.759 SQ Associations: Not Supported 00:20:24.759 UUID List: Not Supported 00:20:24.759 Multi-Domain Subsystem: Not Supported 00:20:24.759 Fixed Capacity Management: Not Supported 
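The kernel target answering this discovery was assembled entirely through nvmet's configfs tree, as traced a little earlier. A condensed sketch of those steps; the trace does not show the redirection targets, so the attribute file names below are the standard nvmet configfs layout and should be read as an inferred reconstruction, with the values taken from this run:

  # Export /dev/nvme1n1 from the kernel nvmet/TCP target at 10.0.0.1:4420.
  modprobe nvmet
  # nvmet-tcp is not loaded explicitly in the trace; the kernel pulls it in when
  # the tcp port is enabled (it is later removed as nvmet_tcp).

  SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  PORT=/sys/kernel/config/nvmet/ports/1

  mkdir "$SUBSYS"
  mkdir "$SUBSYS/namespaces/1"
  mkdir "$PORT"

  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$SUBSYS/attr_model"   # shows up as the Model Number below
  echo 1 > "$SUBSYS/attr_allow_any_host"
  echo /dev/nvme1n1 > "$SUBSYS/namespaces/1/device_path"
  echo 1 > "$SUBSYS/namespaces/1/enable"

  echo 10.0.0.1 > "$PORT/addr_traddr"
  echo tcp > "$PORT/addr_trtype"
  echo 4420 > "$PORT/addr_trsvcid"
  echo ipv4 > "$PORT/addr_adrfam"
  ln -s "$SUBSYS" "$PORT/subsystems/"   # publish the subsystem on the port

  # After which the discovery shown above returns two entries (discovery + testnqn):
  # nvme discover --hostnqn=<NQN> --hostid=<ID> -a 10.0.0.1 -t tcp -s 4420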
00:20:24.759 Variable Capacity Management: Not Supported 00:20:24.759 Delete Endurance Group: Not Supported 00:20:24.759 Delete NVM Set: Not Supported 00:20:24.759 Extended LBA Formats Supported: Not Supported 00:20:24.759 Flexible Data Placement Supported: Not Supported 00:20:24.759 00:20:24.759 Controller Memory Buffer Support 00:20:24.759 ================================ 00:20:24.759 Supported: No 00:20:24.759 00:20:24.759 Persistent Memory Region Support 00:20:24.759 ================================ 00:20:24.759 Supported: No 00:20:24.759 00:20:24.759 Admin Command Set Attributes 00:20:24.759 ============================ 00:20:24.759 Security Send/Receive: Not Supported 00:20:24.759 Format NVM: Not Supported 00:20:24.759 Firmware Activate/Download: Not Supported 00:20:24.759 Namespace Management: Not Supported 00:20:24.759 Device Self-Test: Not Supported 00:20:24.759 Directives: Not Supported 00:20:24.759 NVMe-MI: Not Supported 00:20:24.759 Virtualization Management: Not Supported 00:20:24.759 Doorbell Buffer Config: Not Supported 00:20:24.759 Get LBA Status Capability: Not Supported 00:20:24.759 Command & Feature Lockdown Capability: Not Supported 00:20:24.759 Abort Command Limit: 1 00:20:24.759 Async Event Request Limit: 1 00:20:24.759 Number of Firmware Slots: N/A 00:20:24.759 Firmware Slot 1 Read-Only: N/A 00:20:24.759 Firmware Activation Without Reset: N/A 00:20:24.759 Multiple Update Detection Support: N/A 00:20:24.759 Firmware Update Granularity: No Information Provided 00:20:24.759 Per-Namespace SMART Log: No 00:20:24.759 Asymmetric Namespace Access Log Page: Not Supported 00:20:24.759 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:24.759 Command Effects Log Page: Not Supported 00:20:24.759 Get Log Page Extended Data: Supported 00:20:24.759 Telemetry Log Pages: Not Supported 00:20:24.759 Persistent Event Log Pages: Not Supported 00:20:24.759 Supported Log Pages Log Page: May Support 00:20:24.759 Commands Supported & Effects Log Page: Not Supported 00:20:24.759 Feature Identifiers & Effects Log Page:May Support 00:20:24.759 NVMe-MI Commands & Effects Log Page: May Support 00:20:24.759 Data Area 4 for Telemetry Log: Not Supported 00:20:24.759 Error Log Page Entries Supported: 1 00:20:24.759 Keep Alive: Not Supported 00:20:24.759 00:20:24.759 NVM Command Set Attributes 00:20:24.759 ========================== 00:20:24.759 Submission Queue Entry Size 00:20:24.759 Max: 1 00:20:24.759 Min: 1 00:20:24.759 Completion Queue Entry Size 00:20:24.759 Max: 1 00:20:24.759 Min: 1 00:20:24.759 Number of Namespaces: 0 00:20:24.759 Compare Command: Not Supported 00:20:24.759 Write Uncorrectable Command: Not Supported 00:20:24.759 Dataset Management Command: Not Supported 00:20:24.759 Write Zeroes Command: Not Supported 00:20:24.759 Set Features Save Field: Not Supported 00:20:24.759 Reservations: Not Supported 00:20:24.759 Timestamp: Not Supported 00:20:24.759 Copy: Not Supported 00:20:24.759 Volatile Write Cache: Not Present 00:20:24.759 Atomic Write Unit (Normal): 1 00:20:24.759 Atomic Write Unit (PFail): 1 00:20:24.759 Atomic Compare & Write Unit: 1 00:20:24.759 Fused Compare & Write: Not Supported 00:20:24.759 Scatter-Gather List 00:20:24.759 SGL Command Set: Supported 00:20:24.759 SGL Keyed: Not Supported 00:20:24.759 SGL Bit Bucket Descriptor: Not Supported 00:20:24.759 SGL Metadata Pointer: Not Supported 00:20:24.759 Oversized SGL: Not Supported 00:20:24.759 SGL Metadata Address: Not Supported 00:20:24.759 SGL Offset: Supported 00:20:24.759 Transport SGL Data Block: Not 
Supported 00:20:24.759 Replay Protected Memory Block: Not Supported 00:20:24.759 00:20:24.759 Firmware Slot Information 00:20:24.759 ========================= 00:20:24.759 Active slot: 0 00:20:24.759 00:20:24.759 00:20:24.759 Error Log 00:20:24.759 ========= 00:20:24.759 00:20:24.759 Active Namespaces 00:20:24.759 ================= 00:20:24.759 Discovery Log Page 00:20:24.759 ================== 00:20:24.759 Generation Counter: 2 00:20:24.759 Number of Records: 2 00:20:24.759 Record Format: 0 00:20:24.759 00:20:24.759 Discovery Log Entry 0 00:20:24.759 ---------------------- 00:20:24.759 Transport Type: 3 (TCP) 00:20:24.759 Address Family: 1 (IPv4) 00:20:24.759 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:24.759 Entry Flags: 00:20:24.759 Duplicate Returned Information: 0 00:20:24.759 Explicit Persistent Connection Support for Discovery: 0 00:20:24.759 Transport Requirements: 00:20:24.759 Secure Channel: Not Specified 00:20:24.759 Port ID: 1 (0x0001) 00:20:24.759 Controller ID: 65535 (0xffff) 00:20:24.759 Admin Max SQ Size: 32 00:20:24.759 Transport Service Identifier: 4420 00:20:24.759 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:24.759 Transport Address: 10.0.0.1 00:20:24.759 Discovery Log Entry 1 00:20:24.759 ---------------------- 00:20:24.759 Transport Type: 3 (TCP) 00:20:24.759 Address Family: 1 (IPv4) 00:20:24.759 Subsystem Type: 2 (NVM Subsystem) 00:20:24.759 Entry Flags: 00:20:24.759 Duplicate Returned Information: 0 00:20:24.759 Explicit Persistent Connection Support for Discovery: 0 00:20:24.759 Transport Requirements: 00:20:24.759 Secure Channel: Not Specified 00:20:24.759 Port ID: 1 (0x0001) 00:20:24.759 Controller ID: 65535 (0xffff) 00:20:24.759 Admin Max SQ Size: 32 00:20:24.759 Transport Service Identifier: 4420 00:20:24.759 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:24.759 Transport Address: 10.0.0.1 00:20:24.760 16:12:54 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:25.019 get_feature(0x01) failed 00:20:25.019 get_feature(0x02) failed 00:20:25.019 get_feature(0x04) failed 00:20:25.019 ===================================================== 00:20:25.019 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:25.019 ===================================================== 00:20:25.019 Controller Capabilities/Features 00:20:25.019 ================================ 00:20:25.019 Vendor ID: 0000 00:20:25.019 Subsystem Vendor ID: 0000 00:20:25.019 Serial Number: 4a805e6a1dc7a46a505a 00:20:25.019 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:25.019 Firmware Version: 6.5.12-2 00:20:25.019 Recommended Arb Burst: 6 00:20:25.019 IEEE OUI Identifier: 00 00 00 00:20:25.019 Multi-path I/O 00:20:25.019 May have multiple subsystem ports: Yes 00:20:25.019 May have multiple controllers: Yes 00:20:25.019 Associated with SR-IOV VF: No 00:20:25.019 Max Data Transfer Size: Unlimited 00:20:25.019 Max Number of Namespaces: 1024 00:20:25.019 Max Number of I/O Queues: 128 00:20:25.019 NVMe Specification Version (VS): 1.3 00:20:25.019 NVMe Specification Version (Identify): 1.3 00:20:25.019 Maximum Queue Entries: 1024 00:20:25.019 Contiguous Queues Required: No 00:20:25.019 Arbitration Mechanisms Supported 00:20:25.019 Weighted Round Robin: Not Supported 00:20:25.019 Vendor Specific: Not Supported 00:20:25.019 Reset Timeout: 7500 ms 00:20:25.019 Doorbell Stride: 4 bytes 
00:20:25.019 NVM Subsystem Reset: Not Supported 00:20:25.019 Command Sets Supported 00:20:25.019 NVM Command Set: Supported 00:20:25.019 Boot Partition: Not Supported 00:20:25.019 Memory Page Size Minimum: 4096 bytes 00:20:25.019 Memory Page Size Maximum: 4096 bytes 00:20:25.019 Persistent Memory Region: Not Supported 00:20:25.019 Optional Asynchronous Events Supported 00:20:25.020 Namespace Attribute Notices: Supported 00:20:25.020 Firmware Activation Notices: Not Supported 00:20:25.020 ANA Change Notices: Supported 00:20:25.020 PLE Aggregate Log Change Notices: Not Supported 00:20:25.020 LBA Status Info Alert Notices: Not Supported 00:20:25.020 EGE Aggregate Log Change Notices: Not Supported 00:20:25.020 Normal NVM Subsystem Shutdown event: Not Supported 00:20:25.020 Zone Descriptor Change Notices: Not Supported 00:20:25.020 Discovery Log Change Notices: Not Supported 00:20:25.020 Controller Attributes 00:20:25.020 128-bit Host Identifier: Supported 00:20:25.020 Non-Operational Permissive Mode: Not Supported 00:20:25.020 NVM Sets: Not Supported 00:20:25.020 Read Recovery Levels: Not Supported 00:20:25.020 Endurance Groups: Not Supported 00:20:25.020 Predictable Latency Mode: Not Supported 00:20:25.020 Traffic Based Keep ALive: Supported 00:20:25.020 Namespace Granularity: Not Supported 00:20:25.020 SQ Associations: Not Supported 00:20:25.020 UUID List: Not Supported 00:20:25.020 Multi-Domain Subsystem: Not Supported 00:20:25.020 Fixed Capacity Management: Not Supported 00:20:25.020 Variable Capacity Management: Not Supported 00:20:25.020 Delete Endurance Group: Not Supported 00:20:25.020 Delete NVM Set: Not Supported 00:20:25.020 Extended LBA Formats Supported: Not Supported 00:20:25.020 Flexible Data Placement Supported: Not Supported 00:20:25.020 00:20:25.020 Controller Memory Buffer Support 00:20:25.020 ================================ 00:20:25.020 Supported: No 00:20:25.020 00:20:25.020 Persistent Memory Region Support 00:20:25.020 ================================ 00:20:25.020 Supported: No 00:20:25.020 00:20:25.020 Admin Command Set Attributes 00:20:25.020 ============================ 00:20:25.020 Security Send/Receive: Not Supported 00:20:25.020 Format NVM: Not Supported 00:20:25.020 Firmware Activate/Download: Not Supported 00:20:25.020 Namespace Management: Not Supported 00:20:25.020 Device Self-Test: Not Supported 00:20:25.020 Directives: Not Supported 00:20:25.020 NVMe-MI: Not Supported 00:20:25.020 Virtualization Management: Not Supported 00:20:25.020 Doorbell Buffer Config: Not Supported 00:20:25.020 Get LBA Status Capability: Not Supported 00:20:25.020 Command & Feature Lockdown Capability: Not Supported 00:20:25.020 Abort Command Limit: 4 00:20:25.020 Async Event Request Limit: 4 00:20:25.020 Number of Firmware Slots: N/A 00:20:25.020 Firmware Slot 1 Read-Only: N/A 00:20:25.020 Firmware Activation Without Reset: N/A 00:20:25.020 Multiple Update Detection Support: N/A 00:20:25.020 Firmware Update Granularity: No Information Provided 00:20:25.020 Per-Namespace SMART Log: Yes 00:20:25.020 Asymmetric Namespace Access Log Page: Supported 00:20:25.020 ANA Transition Time : 10 sec 00:20:25.020 00:20:25.020 Asymmetric Namespace Access Capabilities 00:20:25.020 ANA Optimized State : Supported 00:20:25.020 ANA Non-Optimized State : Supported 00:20:25.020 ANA Inaccessible State : Supported 00:20:25.020 ANA Persistent Loss State : Supported 00:20:25.020 ANA Change State : Supported 00:20:25.020 ANAGRPID is not changed : No 00:20:25.020 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:20:25.020 00:20:25.020 ANA Group Identifier Maximum : 128 00:20:25.020 Number of ANA Group Identifiers : 128 00:20:25.020 Max Number of Allowed Namespaces : 1024 00:20:25.020 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:20:25.020 Command Effects Log Page: Supported 00:20:25.020 Get Log Page Extended Data: Supported 00:20:25.020 Telemetry Log Pages: Not Supported 00:20:25.020 Persistent Event Log Pages: Not Supported 00:20:25.020 Supported Log Pages Log Page: May Support 00:20:25.020 Commands Supported & Effects Log Page: Not Supported 00:20:25.020 Feature Identifiers & Effects Log Page:May Support 00:20:25.020 NVMe-MI Commands & Effects Log Page: May Support 00:20:25.020 Data Area 4 for Telemetry Log: Not Supported 00:20:25.020 Error Log Page Entries Supported: 128 00:20:25.020 Keep Alive: Supported 00:20:25.020 Keep Alive Granularity: 1000 ms 00:20:25.020 00:20:25.020 NVM Command Set Attributes 00:20:25.020 ========================== 00:20:25.020 Submission Queue Entry Size 00:20:25.020 Max: 64 00:20:25.020 Min: 64 00:20:25.020 Completion Queue Entry Size 00:20:25.020 Max: 16 00:20:25.020 Min: 16 00:20:25.020 Number of Namespaces: 1024 00:20:25.020 Compare Command: Not Supported 00:20:25.020 Write Uncorrectable Command: Not Supported 00:20:25.020 Dataset Management Command: Supported 00:20:25.020 Write Zeroes Command: Supported 00:20:25.020 Set Features Save Field: Not Supported 00:20:25.020 Reservations: Not Supported 00:20:25.020 Timestamp: Not Supported 00:20:25.020 Copy: Not Supported 00:20:25.020 Volatile Write Cache: Present 00:20:25.020 Atomic Write Unit (Normal): 1 00:20:25.020 Atomic Write Unit (PFail): 1 00:20:25.020 Atomic Compare & Write Unit: 1 00:20:25.020 Fused Compare & Write: Not Supported 00:20:25.020 Scatter-Gather List 00:20:25.020 SGL Command Set: Supported 00:20:25.020 SGL Keyed: Not Supported 00:20:25.020 SGL Bit Bucket Descriptor: Not Supported 00:20:25.020 SGL Metadata Pointer: Not Supported 00:20:25.020 Oversized SGL: Not Supported 00:20:25.020 SGL Metadata Address: Not Supported 00:20:25.020 SGL Offset: Supported 00:20:25.020 Transport SGL Data Block: Not Supported 00:20:25.020 Replay Protected Memory Block: Not Supported 00:20:25.020 00:20:25.020 Firmware Slot Information 00:20:25.020 ========================= 00:20:25.020 Active slot: 0 00:20:25.020 00:20:25.020 Asymmetric Namespace Access 00:20:25.020 =========================== 00:20:25.020 Change Count : 0 00:20:25.020 Number of ANA Group Descriptors : 1 00:20:25.020 ANA Group Descriptor : 0 00:20:25.020 ANA Group ID : 1 00:20:25.020 Number of NSID Values : 1 00:20:25.020 Change Count : 0 00:20:25.020 ANA State : 1 00:20:25.020 Namespace Identifier : 1 00:20:25.020 00:20:25.020 Commands Supported and Effects 00:20:25.020 ============================== 00:20:25.020 Admin Commands 00:20:25.020 -------------- 00:20:25.020 Get Log Page (02h): Supported 00:20:25.020 Identify (06h): Supported 00:20:25.020 Abort (08h): Supported 00:20:25.020 Set Features (09h): Supported 00:20:25.020 Get Features (0Ah): Supported 00:20:25.020 Asynchronous Event Request (0Ch): Supported 00:20:25.020 Keep Alive (18h): Supported 00:20:25.020 I/O Commands 00:20:25.020 ------------ 00:20:25.020 Flush (00h): Supported 00:20:25.020 Write (01h): Supported LBA-Change 00:20:25.020 Read (02h): Supported 00:20:25.020 Write Zeroes (08h): Supported LBA-Change 00:20:25.020 Dataset Management (09h): Supported 00:20:25.020 00:20:25.020 Error Log 00:20:25.020 ========= 00:20:25.020 Entry: 0 00:20:25.020 Error Count: 0x3 00:20:25.020 Submission 
Queue Id: 0x0 00:20:25.020 Command Id: 0x5 00:20:25.020 Phase Bit: 0 00:20:25.020 Status Code: 0x2 00:20:25.020 Status Code Type: 0x0 00:20:25.020 Do Not Retry: 1 00:20:25.020 Error Location: 0x28 00:20:25.020 LBA: 0x0 00:20:25.020 Namespace: 0x0 00:20:25.020 Vendor Log Page: 0x0 00:20:25.020 ----------- 00:20:25.020 Entry: 1 00:20:25.020 Error Count: 0x2 00:20:25.020 Submission Queue Id: 0x0 00:20:25.020 Command Id: 0x5 00:20:25.020 Phase Bit: 0 00:20:25.020 Status Code: 0x2 00:20:25.020 Status Code Type: 0x0 00:20:25.020 Do Not Retry: 1 00:20:25.020 Error Location: 0x28 00:20:25.020 LBA: 0x0 00:20:25.020 Namespace: 0x0 00:20:25.020 Vendor Log Page: 0x0 00:20:25.020 ----------- 00:20:25.020 Entry: 2 00:20:25.020 Error Count: 0x1 00:20:25.020 Submission Queue Id: 0x0 00:20:25.020 Command Id: 0x4 00:20:25.020 Phase Bit: 0 00:20:25.020 Status Code: 0x2 00:20:25.020 Status Code Type: 0x0 00:20:25.020 Do Not Retry: 1 00:20:25.020 Error Location: 0x28 00:20:25.020 LBA: 0x0 00:20:25.020 Namespace: 0x0 00:20:25.020 Vendor Log Page: 0x0 00:20:25.020 00:20:25.020 Number of Queues 00:20:25.020 ================ 00:20:25.020 Number of I/O Submission Queues: 128 00:20:25.020 Number of I/O Completion Queues: 128 00:20:25.020 00:20:25.020 ZNS Specific Controller Data 00:20:25.020 ============================ 00:20:25.021 Zone Append Size Limit: 0 00:20:25.021 00:20:25.021 00:20:25.021 Active Namespaces 00:20:25.021 ================= 00:20:25.021 get_feature(0x05) failed 00:20:25.021 Namespace ID:1 00:20:25.021 Command Set Identifier: NVM (00h) 00:20:25.021 Deallocate: Supported 00:20:25.021 Deallocated/Unwritten Error: Not Supported 00:20:25.021 Deallocated Read Value: Unknown 00:20:25.021 Deallocate in Write Zeroes: Not Supported 00:20:25.021 Deallocated Guard Field: 0xFFFF 00:20:25.021 Flush: Supported 00:20:25.021 Reservation: Not Supported 00:20:25.021 Namespace Sharing Capabilities: Multiple Controllers 00:20:25.021 Size (in LBAs): 1310720 (5GiB) 00:20:25.021 Capacity (in LBAs): 1310720 (5GiB) 00:20:25.021 Utilization (in LBAs): 1310720 (5GiB) 00:20:25.021 UUID: 204d3d64-221d-46dc-b1f6-bd41f768d955 00:20:25.021 Thin Provisioning: Not Supported 00:20:25.021 Per-NS Atomic Units: Yes 00:20:25.021 Atomic Boundary Size (Normal): 0 00:20:25.021 Atomic Boundary Size (PFail): 0 00:20:25.021 Atomic Boundary Offset: 0 00:20:25.021 NGUID/EUI64 Never Reused: No 00:20:25.021 ANA group ID: 1 00:20:25.021 Namespace Write Protected: No 00:20:25.021 Number of LBA Formats: 1 00:20:25.021 Current LBA Format: LBA Format #00 00:20:25.021 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:25.021 00:20:25.021 16:12:54 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:25.021 16:12:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:25.021 16:12:54 -- nvmf/common.sh@117 -- # sync 00:20:25.021 16:12:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.021 16:12:54 -- nvmf/common.sh@120 -- # set +e 00:20:25.021 16:12:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.021 16:12:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.021 rmmod nvme_tcp 00:20:25.021 rmmod nvme_fabrics 00:20:25.021 16:12:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.021 16:12:54 -- nvmf/common.sh@124 -- # set -e 00:20:25.021 16:12:54 -- nvmf/common.sh@125 -- # return 0 00:20:25.021 16:12:54 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:20:25.021 16:12:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:25.021 16:12:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:25.021 16:12:54 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:25.021 16:12:54 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.021 16:12:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:25.021 16:12:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.021 16:12:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.021 16:12:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.021 16:12:54 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:25.021 16:12:54 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:25.021 16:12:54 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:25.021 16:12:54 -- nvmf/common.sh@675 -- # echo 0 00:20:25.021 16:12:54 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:25.021 16:12:54 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:25.021 16:12:54 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:25.021 16:12:54 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:25.021 16:12:54 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:20:25.021 16:12:54 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:20:25.021 16:12:54 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:25.988 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:25.988 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:25.988 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:25.988 00:20:25.988 real 0m3.175s 00:20:25.988 user 0m1.037s 00:20:25.988 sys 0m1.602s 00:20:25.988 16:12:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:25.988 ************************************ 00:20:25.988 END TEST nvmf_identify_kernel_target 00:20:25.988 16:12:55 -- common/autotest_common.sh@10 -- # set +x 00:20:25.988 ************************************ 00:20:26.248 16:12:55 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:26.248 16:12:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:26.248 16:12:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:26.248 16:12:55 -- common/autotest_common.sh@10 -- # set +x 00:20:26.248 ************************************ 00:20:26.248 START TEST nvmf_auth 00:20:26.248 ************************************ 00:20:26.248 16:12:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:26.248 * Looking for test storage... 
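Symmetrically to the setup, the clean_kernel_target sequence above unwinds the configfs tree before setup.sh rebinds the NVMe devices to uio_pci_generic for the next test. A hedged mirror of that sequence; the echo's redirection target is not visible in the trace and is inferred to be the namespace enable switch:

  SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  PORT=/sys/kernel/config/nvmet/ports/1

  echo 0 > "$SUBSYS/namespaces/1/enable"   # inferred target of the 'echo 0' above
  rm -f "$PORT/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$SUBSYS/namespaces/1" "$PORT" "$SUBSYS"
  modprobe -r nvmet_tcp nvmet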
00:20:26.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:26.248 16:12:56 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:26.248 16:12:56 -- nvmf/common.sh@7 -- # uname -s 00:20:26.248 16:12:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.248 16:12:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.248 16:12:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.248 16:12:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.248 16:12:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.248 16:12:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.248 16:12:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.248 16:12:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.248 16:12:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.248 16:12:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.248 16:12:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:20:26.248 16:12:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:20:26.248 16:12:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.248 16:12:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.248 16:12:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:26.248 16:12:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.248 16:12:56 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:26.248 16:12:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.248 16:12:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.248 16:12:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.248 16:12:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.248 16:12:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.248 16:12:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.248 16:12:56 -- paths/export.sh@5 -- # export PATH 00:20:26.248 16:12:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.248 16:12:56 -- nvmf/common.sh@47 -- # : 0 00:20:26.248 16:12:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:26.248 16:12:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:26.248 16:12:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.248 16:12:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.248 16:12:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.248 16:12:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:26.248 16:12:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:26.248 16:12:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:26.248 16:12:56 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:26.248 16:12:56 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:26.248 16:12:56 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:20:26.248 16:12:56 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:26.248 16:12:56 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:26.248 16:12:56 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:26.248 16:12:56 -- host/auth.sh@21 -- # keys=() 00:20:26.248 16:12:56 -- host/auth.sh@77 -- # nvmftestinit 00:20:26.248 16:12:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:26.248 16:12:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.248 16:12:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:26.248 16:12:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:26.248 16:12:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:26.248 16:12:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.248 16:12:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.248 16:12:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.248 16:12:56 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:26.248 16:12:56 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:26.248 16:12:56 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:26.248 16:12:56 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:26.248 16:12:56 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:26.248 16:12:56 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:26.248 16:12:56 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.248 16:12:56 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.248 16:12:56 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:26.248 16:12:56 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:26.248 16:12:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:26.248 16:12:56 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:26.248 16:12:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:26.248 16:12:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.248 16:12:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:26.248 16:12:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:26.248 16:12:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:26.248 16:12:56 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:26.248 16:12:56 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:26.248 16:12:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:26.507 Cannot find device "nvmf_tgt_br" 00:20:26.507 16:12:56 -- nvmf/common.sh@155 -- # true 00:20:26.507 16:12:56 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:26.507 Cannot find device "nvmf_tgt_br2" 00:20:26.507 16:12:56 -- nvmf/common.sh@156 -- # true 00:20:26.507 16:12:56 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:26.507 16:12:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:26.507 Cannot find device "nvmf_tgt_br" 00:20:26.507 16:12:56 -- nvmf/common.sh@158 -- # true 00:20:26.507 16:12:56 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:26.507 Cannot find device "nvmf_tgt_br2" 00:20:26.507 16:12:56 -- nvmf/common.sh@159 -- # true 00:20:26.507 16:12:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:26.507 16:12:56 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:26.507 16:12:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:26.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.507 16:12:56 -- nvmf/common.sh@162 -- # true 00:20:26.507 16:12:56 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:26.507 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.507 16:12:56 -- nvmf/common.sh@163 -- # true 00:20:26.507 16:12:56 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:26.507 16:12:56 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:26.507 16:12:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:26.507 16:12:56 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:26.507 16:12:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:26.507 16:12:56 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:26.507 16:12:56 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:26.507 16:12:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:26.507 16:12:56 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:26.507 16:12:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:26.507 16:12:56 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:26.507 16:12:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:26.507 16:12:56 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:26.507 16:12:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:26.766 16:12:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:26.766 16:12:56 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:26.766 16:12:56 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:26.766 16:12:56 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:26.766 16:12:56 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:26.766 16:12:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:26.766 16:12:56 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:26.766 16:12:56 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:26.766 16:12:56 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:26.766 16:12:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:26.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:20:26.766 00:20:26.766 --- 10.0.0.2 ping statistics --- 00:20:26.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.766 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:20:26.766 16:12:56 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:26.766 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:26.766 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:20:26.766 00:20:26.766 --- 10.0.0.3 ping statistics --- 00:20:26.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.766 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:26.766 16:12:56 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:26.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:26.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:20:26.766 00:20:26.766 --- 10.0.0.1 ping statistics --- 00:20:26.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.766 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:26.766 16:12:56 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.766 16:12:56 -- nvmf/common.sh@422 -- # return 0 00:20:26.766 16:12:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:26.766 16:12:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.766 16:12:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:26.766 16:12:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:26.766 16:12:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.766 16:12:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:26.766 16:12:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:26.766 16:12:56 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:20:26.766 16:12:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:26.766 16:12:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:26.766 16:12:56 -- common/autotest_common.sh@10 -- # set +x 00:20:26.766 16:12:56 -- nvmf/common.sh@470 -- # nvmfpid=89619 00:20:26.766 16:12:56 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:26.766 16:12:56 -- nvmf/common.sh@471 -- # waitforlisten 89619 00:20:26.766 16:12:56 -- common/autotest_common.sh@817 -- # '[' -z 89619 ']' 00:20:26.766 16:12:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.766 16:12:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:26.766 16:12:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
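The ip/iptables trace above is nvmf_veth_init building the test network before the target app starts. Condensed into a standalone sketch (same interface names and addresses as the trace; the second target leg, nvmf_tgt_if2 on 10.0.0.3, is wired the same way and omitted here), the topology is one veth pair per side, bridged together, with TCP/4420 allowed through:

# Sketch of the topology nvmf_veth_init sets up in the trace above: the target runs
# in its own network namespace and is reached through bridged veth pairs.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                           # host -> namespace sanity check, as in the trace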
00:20:26.766 16:12:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:26.766 16:12:56 -- common/autotest_common.sh@10 -- # set +x 00:20:27.023 16:12:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:27.023 16:12:56 -- common/autotest_common.sh@850 -- # return 0 00:20:27.023 16:12:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:27.023 16:12:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:27.023 16:12:56 -- common/autotest_common.sh@10 -- # set +x 00:20:27.282 16:12:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.283 16:12:56 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:27.283 16:12:56 -- host/auth.sh@81 -- # gen_key null 32 00:20:27.283 16:12:56 -- host/auth.sh@53 -- # local digest len file key 00:20:27.283 16:12:56 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:27.283 16:12:57 -- host/auth.sh@54 -- # local -A digests 00:20:27.283 16:12:57 -- host/auth.sh@56 -- # digest=null 00:20:27.283 16:12:57 -- host/auth.sh@56 -- # len=32 00:20:27.283 16:12:57 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:27.283 16:12:57 -- host/auth.sh@57 -- # key=9a2f42f20678832aa531280a3d5c7c62 00:20:27.283 16:12:57 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:20:27.283 16:12:57 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.lN8 00:20:27.283 16:12:57 -- host/auth.sh@59 -- # format_dhchap_key 9a2f42f20678832aa531280a3d5c7c62 0 00:20:27.283 16:12:57 -- nvmf/common.sh@708 -- # format_key DHHC-1 9a2f42f20678832aa531280a3d5c7c62 0 00:20:27.283 16:12:57 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:27.283 16:12:57 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:27.283 16:12:57 -- nvmf/common.sh@693 -- # key=9a2f42f20678832aa531280a3d5c7c62 00:20:27.283 16:12:57 -- nvmf/common.sh@693 -- # digest=0 00:20:27.283 16:12:57 -- nvmf/common.sh@694 -- # python - 00:20:27.283 16:12:57 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.lN8 00:20:27.283 16:12:57 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.lN8 00:20:27.283 16:12:57 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.lN8 00:20:27.283 16:12:57 -- host/auth.sh@82 -- # gen_key null 48 00:20:27.283 16:12:57 -- host/auth.sh@53 -- # local digest len file key 00:20:27.283 16:12:57 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:27.283 16:12:57 -- host/auth.sh@54 -- # local -A digests 00:20:27.283 16:12:57 -- host/auth.sh@56 -- # digest=null 00:20:27.283 16:12:57 -- host/auth.sh@56 -- # len=48 00:20:27.283 16:12:57 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:27.283 16:12:57 -- host/auth.sh@57 -- # key=bbff0558ff20ab9fb2fc59ddc497278c0db8ac7358880c83 00:20:27.283 16:12:57 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:20:27.283 16:12:57 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.E3D 00:20:27.283 16:12:57 -- host/auth.sh@59 -- # format_dhchap_key bbff0558ff20ab9fb2fc59ddc497278c0db8ac7358880c83 0 00:20:27.283 16:12:57 -- nvmf/common.sh@708 -- # format_key DHHC-1 bbff0558ff20ab9fb2fc59ddc497278c0db8ac7358880c83 0 00:20:27.283 16:12:57 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:27.283 16:12:57 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:27.283 16:12:57 -- nvmf/common.sh@693 -- # key=bbff0558ff20ab9fb2fc59ddc497278c0db8ac7358880c83 00:20:27.283 16:12:57 -- nvmf/common.sh@693 -- # digest=0 00:20:27.283 
16:12:57 -- nvmf/common.sh@694 -- # python - 00:20:27.283 16:12:57 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.E3D 00:20:27.283 16:12:57 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.E3D 00:20:27.283 16:12:57 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.E3D 00:20:27.283 16:12:57 -- host/auth.sh@83 -- # gen_key sha256 32 00:20:27.283 16:12:57 -- host/auth.sh@53 -- # local digest len file key 00:20:27.283 16:12:57 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:27.283 16:12:57 -- host/auth.sh@54 -- # local -A digests 00:20:27.283 16:12:57 -- host/auth.sh@56 -- # digest=sha256 00:20:27.283 16:12:57 -- host/auth.sh@56 -- # len=32 00:20:27.283 16:12:57 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:27.283 16:12:57 -- host/auth.sh@57 -- # key=a4a90a759f8736f0ee03b56caf489d23 00:20:27.283 16:12:57 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:20:27.283 16:12:57 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.KPt 00:20:27.283 16:12:57 -- host/auth.sh@59 -- # format_dhchap_key a4a90a759f8736f0ee03b56caf489d23 1 00:20:27.283 16:12:57 -- nvmf/common.sh@708 -- # format_key DHHC-1 a4a90a759f8736f0ee03b56caf489d23 1 00:20:27.283 16:12:57 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:27.283 16:12:57 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:27.283 16:12:57 -- nvmf/common.sh@693 -- # key=a4a90a759f8736f0ee03b56caf489d23 00:20:27.283 16:12:57 -- nvmf/common.sh@693 -- # digest=1 00:20:27.283 16:12:57 -- nvmf/common.sh@694 -- # python - 00:20:27.283 16:12:57 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.KPt 00:20:27.283 16:12:57 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.KPt 00:20:27.283 16:12:57 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.KPt 00:20:27.283 16:12:57 -- host/auth.sh@84 -- # gen_key sha384 48 00:20:27.283 16:12:57 -- host/auth.sh@53 -- # local digest len file key 00:20:27.283 16:12:57 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:27.283 16:12:57 -- host/auth.sh@54 -- # local -A digests 00:20:27.283 16:12:57 -- host/auth.sh@56 -- # digest=sha384 00:20:27.283 16:12:57 -- host/auth.sh@56 -- # len=48 00:20:27.283 16:12:57 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:27.283 16:12:57 -- host/auth.sh@57 -- # key=1f9a17a6325d6bcd42158383c7e7ec3ade20aeaaa3bc6c90 00:20:27.283 16:12:57 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:20:27.283 16:12:57 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.Uh9 00:20:27.283 16:12:57 -- host/auth.sh@59 -- # format_dhchap_key 1f9a17a6325d6bcd42158383c7e7ec3ade20aeaaa3bc6c90 2 00:20:27.283 16:12:57 -- nvmf/common.sh@708 -- # format_key DHHC-1 1f9a17a6325d6bcd42158383c7e7ec3ade20aeaaa3bc6c90 2 00:20:27.283 16:12:57 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:27.283 16:12:57 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:27.283 16:12:57 -- nvmf/common.sh@693 -- # key=1f9a17a6325d6bcd42158383c7e7ec3ade20aeaaa3bc6c90 00:20:27.283 16:12:57 -- nvmf/common.sh@693 -- # digest=2 00:20:27.283 16:12:57 -- nvmf/common.sh@694 -- # python - 00:20:27.542 16:12:57 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.Uh9 00:20:27.542 16:12:57 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.Uh9 00:20:27.542 16:12:57 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.Uh9 00:20:27.542 16:12:57 -- host/auth.sh@85 -- # gen_key sha512 64 00:20:27.542 16:12:57 -- host/auth.sh@53 -- # local digest len file key 00:20:27.542 16:12:57 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:27.542 16:12:57 -- host/auth.sh@54 -- # local -A digests 00:20:27.542 16:12:57 -- host/auth.sh@56 -- # digest=sha512 00:20:27.542 16:12:57 -- host/auth.sh@56 -- # len=64 00:20:27.542 16:12:57 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:27.542 16:12:57 -- host/auth.sh@57 -- # key=3165a56d351bcd8f9cd5b09458d245623d6af3ef4eeaac7738134b517d1c7817 00:20:27.542 16:12:57 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:20:27.542 16:12:57 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.rG6 00:20:27.542 16:12:57 -- host/auth.sh@59 -- # format_dhchap_key 3165a56d351bcd8f9cd5b09458d245623d6af3ef4eeaac7738134b517d1c7817 3 00:20:27.542 16:12:57 -- nvmf/common.sh@708 -- # format_key DHHC-1 3165a56d351bcd8f9cd5b09458d245623d6af3ef4eeaac7738134b517d1c7817 3 00:20:27.542 16:12:57 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:27.542 16:12:57 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:27.542 16:12:57 -- nvmf/common.sh@693 -- # key=3165a56d351bcd8f9cd5b09458d245623d6af3ef4eeaac7738134b517d1c7817 00:20:27.542 16:12:57 -- nvmf/common.sh@693 -- # digest=3 00:20:27.542 16:12:57 -- nvmf/common.sh@694 -- # python - 00:20:27.542 16:12:57 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.rG6 00:20:27.542 16:12:57 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.rG6 00:20:27.542 16:12:57 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.rG6 00:20:27.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.542 16:12:57 -- host/auth.sh@87 -- # waitforlisten 89619 00:20:27.542 16:12:57 -- common/autotest_common.sh@817 -- # '[' -z 89619 ']' 00:20:27.542 16:12:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.542 16:12:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:27.542 16:12:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
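gen_key, traced above, turns /dev/urandom bytes into the DHHC-1 secrets used for the rest of the test. A rough standalone equivalent for one 48-byte null-digest key follows; the python body behind format_dhchap_key is not shown in the trace, so the comments only describe the resulting representation, which matches the keys printed above (the null key starts with DHHC-1:00:, the sha256/sha384/sha512 keys with DHHC-1:01:/02:/03:).

# Sketch of one gen_key invocation, assuming the DHHC-1 secret representation:
#   DHHC-1:<id>:<base64 payload>:
# where <id> 00 marks a secret used as-is and 01/02/03 tag secrets tied to
# SHA-256/384/512, matching the digests table just above.
len=48
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # hex string, two characters per byte
file=$(mktemp -t spdk.key-null.XXX)
# format_dhchap_key "$key" 0 > "$file"           # python helper from the trace; body not shown
chmod 0600 "$file"
# A reasonably recent nvme-cli can emit the same representation with its
# gen-dhchap-key subcommand (exact flags depend on the nvme-cli version).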
00:20:27.542 16:12:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:27.542 16:12:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.800 16:12:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:27.800 16:12:57 -- common/autotest_common.sh@850 -- # return 0 00:20:27.800 16:12:57 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:27.800 16:12:57 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lN8 00:20:27.800 16:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.800 16:12:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.800 16:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.800 16:12:57 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:27.800 16:12:57 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.E3D 00:20:27.800 16:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.800 16:12:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.800 16:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.800 16:12:57 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:27.800 16:12:57 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.KPt 00:20:27.800 16:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.800 16:12:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.800 16:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.800 16:12:57 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:27.800 16:12:57 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Uh9 00:20:27.800 16:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.800 16:12:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.800 16:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.800 16:12:57 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:27.800 16:12:57 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.rG6 00:20:27.800 16:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.800 16:12:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.800 16:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.800 16:12:57 -- host/auth.sh@92 -- # nvmet_auth_init 00:20:27.800 16:12:57 -- host/auth.sh@35 -- # get_main_ns_ip 00:20:27.800 16:12:57 -- nvmf/common.sh@717 -- # local ip 00:20:27.800 16:12:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:27.800 16:12:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:27.800 16:12:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.800 16:12:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.800 16:12:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:27.800 16:12:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.800 16:12:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:27.800 16:12:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:27.800 16:12:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:27.800 16:12:57 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:27.800 16:12:57 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:27.800 16:12:57 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:20:27.800 16:12:57 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:27.800 16:12:57 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:27.800 16:12:57 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:27.800 16:12:57 -- nvmf/common.sh@628 -- # local block nvme 00:20:27.800 16:12:57 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:20:27.800 16:12:57 -- nvmf/common.sh@631 -- # modprobe nvmet 00:20:27.800 16:12:57 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:27.800 16:12:57 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:28.366 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:28.366 Waiting for block devices as requested 00:20:28.366 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:28.366 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:29.300 16:12:58 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:29.300 16:12:58 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:29.300 16:12:58 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:20:29.300 16:12:58 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:29.300 16:12:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:29.300 16:12:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:29.300 16:12:58 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:20:29.300 16:12:58 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:29.300 16:12:58 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:29.300 No valid GPT data, bailing 00:20:29.300 16:12:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:29.300 16:12:59 -- scripts/common.sh@391 -- # pt= 00:20:29.300 16:12:59 -- scripts/common.sh@392 -- # return 1 00:20:29.300 16:12:59 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:20:29.300 16:12:59 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:29.300 16:12:59 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:29.300 16:12:59 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:20:29.300 16:12:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:29.300 16:12:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:29.300 16:12:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:29.300 16:12:59 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:20:29.300 16:12:59 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:29.300 16:12:59 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:29.300 No valid GPT data, bailing 00:20:29.301 16:12:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:29.301 16:12:59 -- scripts/common.sh@391 -- # pt= 00:20:29.301 16:12:59 -- scripts/common.sh@392 -- # return 1 00:20:29.301 16:12:59 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:20:29.301 16:12:59 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:29.301 16:12:59 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:29.301 16:12:59 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:20:29.301 16:12:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:29.301 16:12:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:29.301 16:12:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:29.301 16:12:59 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:20:29.301 16:12:59 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:29.301 16:12:59 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:29.301 No valid GPT data, bailing 00:20:29.301 16:12:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:29.301 16:12:59 -- scripts/common.sh@391 -- # pt= 00:20:29.301 16:12:59 -- scripts/common.sh@392 -- # return 1 00:20:29.301 16:12:59 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:20:29.301 16:12:59 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:29.301 16:12:59 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:29.301 16:12:59 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:20:29.301 16:12:59 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:29.301 16:12:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:29.301 16:12:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:29.301 16:12:59 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:20:29.301 16:12:59 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:29.301 16:12:59 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:29.301 No valid GPT data, bailing 00:20:29.559 16:12:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:29.559 16:12:59 -- scripts/common.sh@391 -- # pt= 00:20:29.559 16:12:59 -- scripts/common.sh@392 -- # return 1 00:20:29.559 16:12:59 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:20:29.559 16:12:59 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:20:29.559 16:12:59 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:29.559 16:12:59 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:29.559 16:12:59 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:29.559 16:12:59 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:29.559 16:12:59 -- nvmf/common.sh@656 -- # echo 1 00:20:29.559 16:12:59 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:20:29.559 16:12:59 -- nvmf/common.sh@658 -- # echo 1 00:20:29.559 16:12:59 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:20:29.559 16:12:59 -- nvmf/common.sh@661 -- # echo tcp 00:20:29.559 16:12:59 -- nvmf/common.sh@662 -- # echo 4420 00:20:29.559 16:12:59 -- nvmf/common.sh@663 -- # echo ipv4 00:20:29.559 16:12:59 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:29.559 16:12:59 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -a 10.0.0.1 -t tcp -s 4420 00:20:29.559 00:20:29.559 Discovery Log Number of Records 2, Generation counter 2 00:20:29.559 =====Discovery Log Entry 0====== 00:20:29.559 trtype: tcp 00:20:29.559 adrfam: ipv4 00:20:29.559 subtype: current discovery subsystem 00:20:29.559 treq: not specified, sq flow control disable supported 00:20:29.559 portid: 1 00:20:29.559 trsvcid: 4420 00:20:29.559 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:29.559 traddr: 10.0.0.1 00:20:29.559 eflags: none 00:20:29.559 sectype: none 00:20:29.559 =====Discovery Log Entry 1====== 00:20:29.559 trtype: tcp 00:20:29.559 adrfam: ipv4 00:20:29.559 subtype: nvme subsystem 00:20:29.559 treq: not specified, sq flow control disable supported 
00:20:29.559 portid: 1 00:20:29.559 trsvcid: 4420 00:20:29.559 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:29.559 traddr: 10.0.0.1 00:20:29.559 eflags: none 00:20:29.559 sectype: none 00:20:29.559 16:12:59 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:29.559 16:12:59 -- host/auth.sh@37 -- # echo 0 00:20:29.559 16:12:59 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:29.559 16:12:59 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:29.559 16:12:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:29.559 16:12:59 -- host/auth.sh@44 -- # digest=sha256 00:20:29.559 16:12:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:29.559 16:12:59 -- host/auth.sh@44 -- # keyid=1 00:20:29.559 16:12:59 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:29.559 16:12:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:29.559 16:12:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:29.559 16:12:59 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:29.559 16:12:59 -- host/auth.sh@100 -- # IFS=, 00:20:29.559 16:12:59 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:20:29.559 16:12:59 -- host/auth.sh@100 -- # IFS=, 00:20:29.559 16:12:59 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:29.559 16:12:59 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:29.559 16:12:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:29.559 16:12:59 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:20:29.559 16:12:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:29.559 16:12:59 -- host/auth.sh@68 -- # keyid=1 00:20:29.559 16:12:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:29.559 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.559 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.559 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.559 16:12:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:29.559 16:12:59 -- nvmf/common.sh@717 -- # local ip 00:20:29.559 16:12:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:29.559 16:12:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:29.559 16:12:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.559 16:12:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.559 16:12:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:29.559 16:12:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.559 16:12:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:29.559 16:12:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:29.559 16:12:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:29.559 16:12:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:29.559 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.559 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.817 
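Above, nvmet_auth_init and nvmet_auth_set_key configure the kernel target's half of DH-HMAC-CHAP through configfs. The trace shows the three echoed values but not the files they are written to; on kernels with nvmet auth support these are normally the host entry's dhchap_hash, dhchap_dhgroup and dhchap_key attributes, so the sequence amounts to roughly:

# Target-side auth setup as traced above. Attribute file names marked "assumed"
# are not visible in the trace and are taken from the kernel nvmet configfs layout.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"            # assumed target of the bare "echo 0": only listed hosts may connect
ln -s "$host" "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
echo 'hmac(sha256)' > "$host/dhchap_hash"         # assumed attribute names (not shown in the trace)
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:YmJm...:' > "$host/dhchap_key"    # truncated here; the full secret appears in the trace above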
nvme0n1 00:20:29.817 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.817 16:12:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.817 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.817 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.817 16:12:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:29.817 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.817 16:12:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.817 16:12:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.817 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.817 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.817 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.817 16:12:59 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:29.817 16:12:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.817 16:12:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:29.817 16:12:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:29.817 16:12:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:29.817 16:12:59 -- host/auth.sh@44 -- # digest=sha256 00:20:29.817 16:12:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:29.817 16:12:59 -- host/auth.sh@44 -- # keyid=0 00:20:29.817 16:12:59 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:29.817 16:12:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:29.817 16:12:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:29.817 16:12:59 -- host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:29.817 16:12:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:20:29.817 16:12:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:29.817 16:12:59 -- host/auth.sh@68 -- # digest=sha256 00:20:29.817 16:12:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:29.817 16:12:59 -- host/auth.sh@68 -- # keyid=0 00:20:29.817 16:12:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:29.817 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.817 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.817 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.817 16:12:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:29.817 16:12:59 -- nvmf/common.sh@717 -- # local ip 00:20:29.817 16:12:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:29.817 16:12:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:29.817 16:12:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.817 16:12:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.817 16:12:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:29.817 16:12:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.817 16:12:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:29.817 16:12:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:29.817 16:12:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:29.817 16:12:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:29.817 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.817 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.817 nvme0n1 
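The attach/verify/detach sequence above is the initiator half of one authentication round. Restated as standalone calls against the running app (rpc_cmd in this suite is a thin wrapper around scripts/rpc.py; the method names and flags below are exactly those in the trace):

# Initiator side of a single round, using the repo path seen elsewhere in this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc keyring_file_add_key key1 /tmp/spdk.key-null.E3D           # register the secret with the app's keyring
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
$rpc bdev_nvme_get_controllers | jq -r '.[].name'               # expect "nvme0" only if authentication succeeded
$rpc bdev_nvme_detach_controller nvme0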
00:20:29.817 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.817 16:12:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:29.817 16:12:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.817 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.817 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.817 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.075 16:12:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.075 16:12:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.075 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.075 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:30.075 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.075 16:12:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:30.075 16:12:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:30.075 16:12:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:30.075 16:12:59 -- host/auth.sh@44 -- # digest=sha256 00:20:30.075 16:12:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:30.075 16:12:59 -- host/auth.sh@44 -- # keyid=1 00:20:30.075 16:12:59 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:30.075 16:12:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:30.075 16:12:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:30.075 16:12:59 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:30.075 16:12:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:20:30.075 16:12:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:30.075 16:12:59 -- host/auth.sh@68 -- # digest=sha256 00:20:30.075 16:12:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:30.076 16:12:59 -- host/auth.sh@68 -- # keyid=1 00:20:30.076 16:12:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.076 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.076 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:30.076 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.076 16:12:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:30.076 16:12:59 -- nvmf/common.sh@717 -- # local ip 00:20:30.076 16:12:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:30.076 16:12:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:30.076 16:12:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.076 16:12:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.076 16:12:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:30.076 16:12:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.076 16:12:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:30.076 16:12:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:30.076 16:12:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:30.076 16:12:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:30.076 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.076 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:30.076 nvme0n1 00:20:30.076 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.076 16:12:59 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:20:30.076 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.076 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:30.076 16:12:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:30.076 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.076 16:12:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.076 16:12:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.076 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.076 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:30.076 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.076 16:12:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:30.076 16:12:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:30.076 16:12:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:30.076 16:12:59 -- host/auth.sh@44 -- # digest=sha256 00:20:30.076 16:12:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:30.076 16:12:59 -- host/auth.sh@44 -- # keyid=2 00:20:30.076 16:12:59 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:30.076 16:12:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:30.076 16:12:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:30.076 16:12:59 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:30.076 16:12:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:20:30.076 16:12:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:30.076 16:12:59 -- host/auth.sh@68 -- # digest=sha256 00:20:30.076 16:12:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:30.076 16:12:59 -- host/auth.sh@68 -- # keyid=2 00:20:30.076 16:12:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.076 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.076 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:30.076 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.076 16:12:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:30.076 16:12:59 -- nvmf/common.sh@717 -- # local ip 00:20:30.076 16:12:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:30.076 16:12:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:30.076 16:12:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.076 16:12:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.076 16:12:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:30.076 16:12:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.076 16:12:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:30.076 16:12:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:30.076 16:12:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:30.076 16:12:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:30.076 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.076 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:30.334 nvme0n1 00:20:30.334 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.334 16:13:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:30.334 16:13:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.334 16:13:00 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:20:30.334 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:30.334 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.334 16:13:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.334 16:13:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.334 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.334 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:30.334 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.334 16:13:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:30.334 16:13:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:30.334 16:13:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:30.334 16:13:00 -- host/auth.sh@44 -- # digest=sha256 00:20:30.334 16:13:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:30.334 16:13:00 -- host/auth.sh@44 -- # keyid=3 00:20:30.334 16:13:00 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:30.334 16:13:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:30.334 16:13:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:30.334 16:13:00 -- host/auth.sh@49 -- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:30.334 16:13:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:20:30.334 16:13:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:30.334 16:13:00 -- host/auth.sh@68 -- # digest=sha256 00:20:30.334 16:13:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:30.334 16:13:00 -- host/auth.sh@68 -- # keyid=3 00:20:30.334 16:13:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.334 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.334 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:30.334 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.334 16:13:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:30.334 16:13:00 -- nvmf/common.sh@717 -- # local ip 00:20:30.334 16:13:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:30.334 16:13:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:30.334 16:13:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.334 16:13:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.334 16:13:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:30.334 16:13:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.334 16:13:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:30.334 16:13:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:30.334 16:13:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:30.334 16:13:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:30.334 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.334 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:30.334 nvme0n1 00:20:30.334 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.334 16:13:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.334 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.334 16:13:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:30.334 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:30.334 16:13:00 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.334 16:13:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.334 16:13:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.334 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.334 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:30.334 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.334 16:13:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:30.334 16:13:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:30.334 16:13:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:30.334 16:13:00 -- host/auth.sh@44 -- # digest=sha256 00:20:30.334 16:13:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:30.335 16:13:00 -- host/auth.sh@44 -- # keyid=4 00:20:30.335 16:13:00 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:30.335 16:13:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:30.335 16:13:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:30.335 16:13:00 -- host/auth.sh@49 -- # echo DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:30.335 16:13:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:20:30.335 16:13:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:30.335 16:13:00 -- host/auth.sh@68 -- # digest=sha256 00:20:30.335 16:13:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:30.335 16:13:00 -- host/auth.sh@68 -- # keyid=4 00:20:30.335 16:13:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:30.335 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.335 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:30.592 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.592 16:13:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:30.592 16:13:00 -- nvmf/common.sh@717 -- # local ip 00:20:30.592 16:13:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:30.592 16:13:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:30.592 16:13:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.592 16:13:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.592 16:13:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:30.592 16:13:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.592 16:13:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:30.592 16:13:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:30.592 16:13:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:30.592 16:13:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:30.592 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.592 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:30.592 nvme0n1 00:20:30.592 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.592 16:13:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.592 16:13:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:30.592 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.592 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:30.592 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.592 16:13:00 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.592 16:13:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.592 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.592 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:30.592 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.592 16:13:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.592 16:13:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:30.592 16:13:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:30.592 16:13:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:30.592 16:13:00 -- host/auth.sh@44 -- # digest=sha256 00:20:30.592 16:13:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:30.592 16:13:00 -- host/auth.sh@44 -- # keyid=0 00:20:30.592 16:13:00 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:30.592 16:13:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:30.592 16:13:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:30.849 16:13:00 -- host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:30.850 16:13:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:20:30.850 16:13:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:30.850 16:13:00 -- host/auth.sh@68 -- # digest=sha256 00:20:30.850 16:13:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:30.850 16:13:00 -- host/auth.sh@68 -- # keyid=0 00:20:30.850 16:13:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:30.850 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.850 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:30.850 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.850 16:13:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:30.850 16:13:00 -- nvmf/common.sh@717 -- # local ip 00:20:30.850 16:13:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:30.850 16:13:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:30.850 16:13:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.850 16:13:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.850 16:13:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:30.850 16:13:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.850 16:13:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:30.850 16:13:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:30.850 16:13:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:30.850 16:13:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:30.850 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.850 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:31.119 nvme0n1 00:20:31.119 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.119 16:13:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.119 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.119 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:31.119 16:13:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:31.119 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.119 16:13:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.119 16:13:00 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.119 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.119 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:31.119 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.119 16:13:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:31.119 16:13:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:31.119 16:13:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:31.119 16:13:00 -- host/auth.sh@44 -- # digest=sha256 00:20:31.119 16:13:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:31.119 16:13:00 -- host/auth.sh@44 -- # keyid=1 00:20:31.119 16:13:00 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:31.119 16:13:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:31.119 16:13:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:31.119 16:13:00 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:31.119 16:13:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:20:31.119 16:13:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:31.119 16:13:00 -- host/auth.sh@68 -- # digest=sha256 00:20:31.119 16:13:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:31.119 16:13:00 -- host/auth.sh@68 -- # keyid=1 00:20:31.119 16:13:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:31.119 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.119 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:31.119 16:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.119 16:13:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:31.119 16:13:00 -- nvmf/common.sh@717 -- # local ip 00:20:31.119 16:13:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:31.119 16:13:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:31.119 16:13:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.119 16:13:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.119 16:13:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:31.119 16:13:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.119 16:13:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:31.119 16:13:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:31.119 16:13:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:31.119 16:13:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:31.119 16:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.119 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:20:31.405 nvme0n1 00:20:31.405 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.405 16:13:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.405 16:13:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:31.406 16:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.406 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.406 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.406 16:13:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.406 16:13:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.406 16:13:01 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:20:31.406 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.406 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.406 16:13:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:31.406 16:13:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:31.406 16:13:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:31.406 16:13:01 -- host/auth.sh@44 -- # digest=sha256 00:20:31.406 16:13:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:31.406 16:13:01 -- host/auth.sh@44 -- # keyid=2 00:20:31.406 16:13:01 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:31.406 16:13:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:31.406 16:13:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:31.406 16:13:01 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:31.406 16:13:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:20:31.406 16:13:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:31.406 16:13:01 -- host/auth.sh@68 -- # digest=sha256 00:20:31.406 16:13:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:31.406 16:13:01 -- host/auth.sh@68 -- # keyid=2 00:20:31.406 16:13:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:31.406 16:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.406 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.406 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.406 16:13:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:31.406 16:13:01 -- nvmf/common.sh@717 -- # local ip 00:20:31.406 16:13:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:31.406 16:13:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:31.406 16:13:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.406 16:13:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.406 16:13:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:31.406 16:13:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.406 16:13:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:31.406 16:13:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:31.406 16:13:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:31.406 16:13:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:31.406 16:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.406 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.406 nvme0n1 00:20:31.406 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.406 16:13:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.406 16:13:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:31.406 16:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.406 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.406 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.406 16:13:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.406 16:13:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.406 16:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.406 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.406 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.406 
16:13:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:31.406 16:13:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:31.406 16:13:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:31.406 16:13:01 -- host/auth.sh@44 -- # digest=sha256 00:20:31.406 16:13:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:31.406 16:13:01 -- host/auth.sh@44 -- # keyid=3 00:20:31.406 16:13:01 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:31.406 16:13:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:31.406 16:13:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:31.406 16:13:01 -- host/auth.sh@49 -- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:31.406 16:13:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:20:31.406 16:13:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:31.406 16:13:01 -- host/auth.sh@68 -- # digest=sha256 00:20:31.406 16:13:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:31.406 16:13:01 -- host/auth.sh@68 -- # keyid=3 00:20:31.406 16:13:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:31.406 16:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.406 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.406 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.406 16:13:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:31.406 16:13:01 -- nvmf/common.sh@717 -- # local ip 00:20:31.406 16:13:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:31.406 16:13:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:31.406 16:13:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.406 16:13:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.406 16:13:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:31.406 16:13:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.406 16:13:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:31.406 16:13:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:31.406 16:13:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:31.406 16:13:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:31.406 16:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.406 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.665 nvme0n1 00:20:31.665 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.665 16:13:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.665 16:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.665 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.665 16:13:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:31.665 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.665 16:13:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.665 16:13:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.665 16:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.665 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.665 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.665 16:13:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:31.665 16:13:01 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:20:31.665 16:13:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:31.665 16:13:01 -- host/auth.sh@44 -- # digest=sha256 00:20:31.665 16:13:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:31.665 16:13:01 -- host/auth.sh@44 -- # keyid=4 00:20:31.665 16:13:01 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:31.665 16:13:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:31.665 16:13:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:31.665 16:13:01 -- host/auth.sh@49 -- # echo DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:31.665 16:13:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:20:31.665 16:13:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:31.665 16:13:01 -- host/auth.sh@68 -- # digest=sha256 00:20:31.665 16:13:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:31.665 16:13:01 -- host/auth.sh@68 -- # keyid=4 00:20:31.665 16:13:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:31.665 16:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.665 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.665 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.665 16:13:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:31.665 16:13:01 -- nvmf/common.sh@717 -- # local ip 00:20:31.665 16:13:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:31.665 16:13:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:31.665 16:13:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.665 16:13:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.665 16:13:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:31.665 16:13:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.665 16:13:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:31.665 16:13:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:31.665 16:13:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:31.666 16:13:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:31.666 16:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.666 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.924 nvme0n1 00:20:31.924 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.924 16:13:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:31.924 16:13:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.924 16:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.924 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.924 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.924 16:13:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.924 16:13:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.924 16:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.924 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.924 16:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.925 16:13:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.925 16:13:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:31.925 16:13:01 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:20:31.925 16:13:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:31.925 16:13:01 -- host/auth.sh@44 -- # digest=sha256 00:20:31.925 16:13:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:31.925 16:13:01 -- host/auth.sh@44 -- # keyid=0 00:20:31.925 16:13:01 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:31.925 16:13:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:31.925 16:13:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:32.492 16:13:02 -- host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:32.492 16:13:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:20:32.492 16:13:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:32.492 16:13:02 -- host/auth.sh@68 -- # digest=sha256 00:20:32.492 16:13:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:32.492 16:13:02 -- host/auth.sh@68 -- # keyid=0 00:20:32.492 16:13:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:32.492 16:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.492 16:13:02 -- common/autotest_common.sh@10 -- # set +x 00:20:32.492 16:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.492 16:13:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:32.492 16:13:02 -- nvmf/common.sh@717 -- # local ip 00:20:32.492 16:13:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:32.492 16:13:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:32.492 16:13:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.492 16:13:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.492 16:13:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:32.492 16:13:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.492 16:13:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:32.492 16:13:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:32.492 16:13:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:32.492 16:13:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:32.492 16:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.492 16:13:02 -- common/autotest_common.sh@10 -- # set +x 00:20:32.749 nvme0n1 00:20:32.750 16:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.750 16:13:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.750 16:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.750 16:13:02 -- common/autotest_common.sh@10 -- # set +x 00:20:32.750 16:13:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:32.750 16:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.750 16:13:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.750 16:13:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.750 16:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.750 16:13:02 -- common/autotest_common.sh@10 -- # set +x 00:20:32.750 16:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.750 16:13:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:32.750 16:13:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:32.750 16:13:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:32.750 16:13:02 -- host/auth.sh@44 -- # 
digest=sha256 00:20:32.750 16:13:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:32.750 16:13:02 -- host/auth.sh@44 -- # keyid=1 00:20:32.750 16:13:02 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:32.750 16:13:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:32.750 16:13:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:32.750 16:13:02 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:32.750 16:13:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:20:32.750 16:13:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:32.750 16:13:02 -- host/auth.sh@68 -- # digest=sha256 00:20:32.750 16:13:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:32.750 16:13:02 -- host/auth.sh@68 -- # keyid=1 00:20:32.750 16:13:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:32.750 16:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.750 16:13:02 -- common/autotest_common.sh@10 -- # set +x 00:20:32.750 16:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.750 16:13:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:32.750 16:13:02 -- nvmf/common.sh@717 -- # local ip 00:20:32.750 16:13:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:32.750 16:13:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:32.750 16:13:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.750 16:13:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.750 16:13:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:32.750 16:13:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.750 16:13:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:32.750 16:13:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:32.750 16:13:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:32.750 16:13:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:32.750 16:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.750 16:13:02 -- common/autotest_common.sh@10 -- # set +x 00:20:33.008 nvme0n1 00:20:33.008 16:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.008 16:13:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:33.008 16:13:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.008 16:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.008 16:13:02 -- common/autotest_common.sh@10 -- # set +x 00:20:33.008 16:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.008 16:13:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.008 16:13:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.008 16:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.008 16:13:02 -- common/autotest_common.sh@10 -- # set +x 00:20:33.008 16:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.008 16:13:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:33.008 16:13:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:33.008 16:13:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:33.008 16:13:02 -- host/auth.sh@44 -- # digest=sha256 00:20:33.008 16:13:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:33.008 16:13:02 -- host/auth.sh@44 
-- # keyid=2 00:20:33.008 16:13:02 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:33.008 16:13:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:33.008 16:13:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:33.008 16:13:02 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:33.008 16:13:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:20:33.008 16:13:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:33.008 16:13:02 -- host/auth.sh@68 -- # digest=sha256 00:20:33.008 16:13:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:33.008 16:13:02 -- host/auth.sh@68 -- # keyid=2 00:20:33.008 16:13:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.008 16:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.008 16:13:02 -- common/autotest_common.sh@10 -- # set +x 00:20:33.008 16:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.008 16:13:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:33.008 16:13:02 -- nvmf/common.sh@717 -- # local ip 00:20:33.008 16:13:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:33.008 16:13:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:33.008 16:13:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.008 16:13:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.008 16:13:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:33.008 16:13:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.008 16:13:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:33.008 16:13:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:33.008 16:13:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:33.008 16:13:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:33.008 16:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.008 16:13:02 -- common/autotest_common.sh@10 -- # set +x 00:20:33.267 nvme0n1 00:20:33.267 16:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.267 16:13:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:33.267 16:13:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.267 16:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.267 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:20:33.267 16:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.267 16:13:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.267 16:13:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.267 16:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.267 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:20:33.267 16:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.267 16:13:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:33.267 16:13:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:20:33.267 16:13:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:33.267 16:13:03 -- host/auth.sh@44 -- # digest=sha256 00:20:33.267 16:13:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:33.267 16:13:03 -- host/auth.sh@44 -- # keyid=3 00:20:33.267 16:13:03 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:33.267 16:13:03 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:33.267 16:13:03 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:33.267 16:13:03 -- host/auth.sh@49 -- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:33.267 16:13:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:20:33.267 16:13:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:33.267 16:13:03 -- host/auth.sh@68 -- # digest=sha256 00:20:33.267 16:13:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:33.267 16:13:03 -- host/auth.sh@68 -- # keyid=3 00:20:33.267 16:13:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.267 16:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.267 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:20:33.267 16:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.267 16:13:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:33.267 16:13:03 -- nvmf/common.sh@717 -- # local ip 00:20:33.267 16:13:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:33.267 16:13:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:33.267 16:13:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.267 16:13:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.267 16:13:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:33.267 16:13:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.267 16:13:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:33.267 16:13:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:33.267 16:13:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:33.267 16:13:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:33.267 16:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.267 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:20:33.525 nvme0n1 00:20:33.525 16:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.525 16:13:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.525 16:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.525 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:20:33.525 16:13:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:33.525 16:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.525 16:13:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.525 16:13:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.525 16:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.525 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:20:33.525 16:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.525 16:13:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:33.525 16:13:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:33.525 16:13:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:33.525 16:13:03 -- host/auth.sh@44 -- # digest=sha256 00:20:33.525 16:13:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:33.525 16:13:03 -- host/auth.sh@44 -- # keyid=4 00:20:33.525 16:13:03 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:33.525 16:13:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:33.525 16:13:03 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:20:33.525 16:13:03 -- host/auth.sh@49 -- # echo DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:33.525 16:13:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:20:33.525 16:13:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:33.525 16:13:03 -- host/auth.sh@68 -- # digest=sha256 00:20:33.525 16:13:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:33.525 16:13:03 -- host/auth.sh@68 -- # keyid=4 00:20:33.525 16:13:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.525 16:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.525 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:20:33.525 16:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.525 16:13:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:33.525 16:13:03 -- nvmf/common.sh@717 -- # local ip 00:20:33.525 16:13:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:33.525 16:13:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:33.525 16:13:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.525 16:13:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.525 16:13:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:33.525 16:13:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.525 16:13:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:33.525 16:13:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:33.525 16:13:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:33.525 16:13:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:33.525 16:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.525 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:20:33.782 nvme0n1 00:20:33.782 16:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.782 16:13:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.782 16:13:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:33.782 16:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.782 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:20:33.782 16:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.782 16:13:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.782 16:13:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.782 16:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.782 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:20:33.782 16:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.782 16:13:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.782 16:13:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:33.782 16:13:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:33.782 16:13:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:33.782 16:13:03 -- host/auth.sh@44 -- # digest=sha256 00:20:33.782 16:13:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:33.782 16:13:03 -- host/auth.sh@44 -- # keyid=0 00:20:33.782 16:13:03 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:33.782 16:13:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:33.782 16:13:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:35.683 16:13:05 -- 
host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:35.683 16:13:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:20:35.683 16:13:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:35.683 16:13:05 -- host/auth.sh@68 -- # digest=sha256 00:20:35.683 16:13:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:35.683 16:13:05 -- host/auth.sh@68 -- # keyid=0 00:20:35.683 16:13:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:35.683 16:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.683 16:13:05 -- common/autotest_common.sh@10 -- # set +x 00:20:35.683 16:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.683 16:13:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:35.683 16:13:05 -- nvmf/common.sh@717 -- # local ip 00:20:35.683 16:13:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:35.683 16:13:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:35.683 16:13:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.683 16:13:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.683 16:13:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:35.683 16:13:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.683 16:13:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:35.683 16:13:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:35.683 16:13:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:35.683 16:13:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:35.683 16:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.683 16:13:05 -- common/autotest_common.sh@10 -- # set +x 00:20:35.683 nvme0n1 00:20:35.683 16:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.683 16:13:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.683 16:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.683 16:13:05 -- common/autotest_common.sh@10 -- # set +x 00:20:35.683 16:13:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:35.683 16:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.683 16:13:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.683 16:13:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.683 16:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.683 16:13:05 -- common/autotest_common.sh@10 -- # set +x 00:20:35.683 16:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.683 16:13:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:35.683 16:13:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:35.683 16:13:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:35.683 16:13:05 -- host/auth.sh@44 -- # digest=sha256 00:20:35.683 16:13:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:35.683 16:13:05 -- host/auth.sh@44 -- # keyid=1 00:20:35.683 16:13:05 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:35.683 16:13:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:35.683 16:13:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:35.683 16:13:05 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:35.683 16:13:05 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:20:35.683 16:13:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:35.683 16:13:05 -- host/auth.sh@68 -- # digest=sha256 00:20:35.683 16:13:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:35.683 16:13:05 -- host/auth.sh@68 -- # keyid=1 00:20:35.683 16:13:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:35.683 16:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.683 16:13:05 -- common/autotest_common.sh@10 -- # set +x 00:20:35.683 16:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.683 16:13:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:35.683 16:13:05 -- nvmf/common.sh@717 -- # local ip 00:20:35.683 16:13:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:35.683 16:13:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:35.683 16:13:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.683 16:13:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.683 16:13:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:35.683 16:13:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.683 16:13:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:35.683 16:13:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:35.683 16:13:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:35.683 16:13:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:35.683 16:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.683 16:13:05 -- common/autotest_common.sh@10 -- # set +x 00:20:35.941 nvme0n1 00:20:35.941 16:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.941 16:13:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.941 16:13:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:35.941 16:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.941 16:13:05 -- common/autotest_common.sh@10 -- # set +x 00:20:35.941 16:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.198 16:13:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.198 16:13:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.198 16:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.198 16:13:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.198 16:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.198 16:13:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.198 16:13:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:36.198 16:13:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.198 16:13:05 -- host/auth.sh@44 -- # digest=sha256 00:20:36.198 16:13:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:36.198 16:13:05 -- host/auth.sh@44 -- # keyid=2 00:20:36.198 16:13:05 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:36.198 16:13:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:36.198 16:13:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:36.198 16:13:05 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:36.198 16:13:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:20:36.198 16:13:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.198 16:13:05 -- 
host/auth.sh@68 -- # digest=sha256 00:20:36.198 16:13:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:36.198 16:13:05 -- host/auth.sh@68 -- # keyid=2 00:20:36.198 16:13:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.198 16:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.198 16:13:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.198 16:13:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.198 16:13:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:36.198 16:13:05 -- nvmf/common.sh@717 -- # local ip 00:20:36.198 16:13:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.198 16:13:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.198 16:13:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.198 16:13:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.198 16:13:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:36.198 16:13:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.198 16:13:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:36.198 16:13:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:36.198 16:13:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:36.198 16:13:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:36.198 16:13:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.198 16:13:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.456 nvme0n1 00:20:36.456 16:13:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.456 16:13:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.456 16:13:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.456 16:13:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.456 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:20:36.456 16:13:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.456 16:13:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.456 16:13:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.456 16:13:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.456 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:20:36.456 16:13:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.456 16:13:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.456 16:13:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:36.456 16:13:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.456 16:13:06 -- host/auth.sh@44 -- # digest=sha256 00:20:36.456 16:13:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:36.456 16:13:06 -- host/auth.sh@44 -- # keyid=3 00:20:36.456 16:13:06 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:36.456 16:13:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:36.456 16:13:06 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:36.456 16:13:06 -- host/auth.sh@49 -- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:36.456 16:13:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:20:36.456 16:13:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.456 16:13:06 -- host/auth.sh@68 -- # digest=sha256 00:20:36.456 16:13:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:36.456 16:13:06 
-- host/auth.sh@68 -- # keyid=3 00:20:36.456 16:13:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.456 16:13:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.456 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:20:36.456 16:13:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.456 16:13:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:36.456 16:13:06 -- nvmf/common.sh@717 -- # local ip 00:20:36.456 16:13:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.456 16:13:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.456 16:13:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.456 16:13:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.456 16:13:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:36.456 16:13:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.456 16:13:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:36.456 16:13:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:36.456 16:13:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:36.456 16:13:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:36.456 16:13:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.456 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:20:36.715 nvme0n1 00:20:36.715 16:13:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.715 16:13:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.715 16:13:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.715 16:13:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.715 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:20:36.715 16:13:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.715 16:13:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.715 16:13:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.715 16:13:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.715 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:20:36.715 16:13:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.715 16:13:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.715 16:13:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:36.715 16:13:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.715 16:13:06 -- host/auth.sh@44 -- # digest=sha256 00:20:36.715 16:13:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:36.715 16:13:06 -- host/auth.sh@44 -- # keyid=4 00:20:36.715 16:13:06 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:36.715 16:13:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:36.715 16:13:06 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:36.715 16:13:06 -- host/auth.sh@49 -- # echo DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:36.715 16:13:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:20:36.715 16:13:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.715 16:13:06 -- host/auth.sh@68 -- # digest=sha256 00:20:36.715 16:13:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:36.715 16:13:06 -- host/auth.sh@68 -- # keyid=4 00:20:36.715 16:13:06 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.715 16:13:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.715 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:20:36.715 16:13:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.007 16:13:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:37.008 16:13:06 -- nvmf/common.sh@717 -- # local ip 00:20:37.008 16:13:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:37.008 16:13:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:37.008 16:13:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.008 16:13:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.008 16:13:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:37.008 16:13:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.008 16:13:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:37.008 16:13:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:37.008 16:13:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:37.008 16:13:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:37.008 16:13:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.008 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:20:37.267 nvme0n1 00:20:37.267 16:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.267 16:13:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.267 16:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.267 16:13:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:37.267 16:13:07 -- common/autotest_common.sh@10 -- # set +x 00:20:37.267 16:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.267 16:13:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.267 16:13:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.267 16:13:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.267 16:13:07 -- common/autotest_common.sh@10 -- # set +x 00:20:37.267 16:13:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.267 16:13:07 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.267 16:13:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:37.267 16:13:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:37.267 16:13:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:37.267 16:13:07 -- host/auth.sh@44 -- # digest=sha256 00:20:37.267 16:13:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:37.267 16:13:07 -- host/auth.sh@44 -- # keyid=0 00:20:37.267 16:13:07 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:37.267 16:13:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:37.267 16:13:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:41.458 16:13:10 -- host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:41.458 16:13:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:20:41.458 16:13:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:41.458 16:13:10 -- host/auth.sh@68 -- # digest=sha256 00:20:41.458 16:13:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:41.458 16:13:10 -- host/auth.sh@68 -- # keyid=0 00:20:41.458 16:13:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
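For reference, the per-iteration flow that the host/auth.sh trace above keeps repeating can be condensed into the short bash sketch below. It is illustrative only: the rpc_cmd helper, controller name, NQNs, address/port and key handles are copied from the trace itself, and rpc_cmd is assumed to forward to SPDK's scripts/rpc.py as it normally does in this test suite; the concrete digest/dhgroup/keyid values shown are just one of the combinations exercised above.

    # One authentication round, as driven by host/auth.sh (hedged sketch).
    # The target-side key was installed beforehand by the test's
    # nvmet_auth_set_key helper (seen at host/auth.sh@110 in the trace).
    digest=sha256; dhgroup=ffdhe8192; keyid=0

    # Restrict the initiator to a single digest/DH-group combination.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect with the matching DH-HMAC-CHAP key and confirm the controller appears.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

    # Tear the controller down before the next digest/dhgroup/keyid combination.
    rpc_cmd bdev_nvme_detach_controller nvme0

The outer loops visible in the trace (host/auth.sh@107-109) iterate this block over the configured digests (sha256, then sha384), the ffdhe2048 through ffdhe8192 DH groups, and key IDs 0-4, which is why the same sequence of set_options/attach/get_controllers/detach entries recurs throughout this part of the log.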
00:20:41.458 16:13:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.458 16:13:10 -- common/autotest_common.sh@10 -- # set +x 00:20:41.458 16:13:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.458 16:13:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:41.458 16:13:10 -- nvmf/common.sh@717 -- # local ip 00:20:41.458 16:13:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:41.458 16:13:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:41.458 16:13:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.458 16:13:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.458 16:13:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:41.458 16:13:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.458 16:13:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:41.458 16:13:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:41.458 16:13:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:41.458 16:13:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:41.458 16:13:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.458 16:13:10 -- common/autotest_common.sh@10 -- # set +x 00:20:41.458 nvme0n1 00:20:41.458 16:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.458 16:13:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.458 16:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.458 16:13:11 -- common/autotest_common.sh@10 -- # set +x 00:20:41.458 16:13:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:41.458 16:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.458 16:13:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.458 16:13:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.458 16:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.458 16:13:11 -- common/autotest_common.sh@10 -- # set +x 00:20:41.458 16:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.458 16:13:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:41.458 16:13:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:41.458 16:13:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:41.458 16:13:11 -- host/auth.sh@44 -- # digest=sha256 00:20:41.458 16:13:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:41.458 16:13:11 -- host/auth.sh@44 -- # keyid=1 00:20:41.458 16:13:11 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:41.458 16:13:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:41.458 16:13:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:41.458 16:13:11 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:41.458 16:13:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:20:41.458 16:13:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:41.458 16:13:11 -- host/auth.sh@68 -- # digest=sha256 00:20:41.458 16:13:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:41.458 16:13:11 -- host/auth.sh@68 -- # keyid=1 00:20:41.458 16:13:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:41.458 16:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.458 16:13:11 -- 
common/autotest_common.sh@10 -- # set +x 00:20:41.458 16:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.458 16:13:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:41.458 16:13:11 -- nvmf/common.sh@717 -- # local ip 00:20:41.458 16:13:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:41.458 16:13:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:41.458 16:13:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.458 16:13:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.458 16:13:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:41.458 16:13:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.458 16:13:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:41.458 16:13:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:41.458 16:13:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:41.458 16:13:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:41.458 16:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.458 16:13:11 -- common/autotest_common.sh@10 -- # set +x 00:20:42.025 nvme0n1 00:20:42.025 16:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.025 16:13:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.025 16:13:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:42.025 16:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.025 16:13:11 -- common/autotest_common.sh@10 -- # set +x 00:20:42.025 16:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.025 16:13:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.025 16:13:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.025 16:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.025 16:13:11 -- common/autotest_common.sh@10 -- # set +x 00:20:42.025 16:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.025 16:13:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:42.025 16:13:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:42.025 16:13:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:42.025 16:13:11 -- host/auth.sh@44 -- # digest=sha256 00:20:42.025 16:13:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.025 16:13:11 -- host/auth.sh@44 -- # keyid=2 00:20:42.025 16:13:11 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:42.025 16:13:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:42.025 16:13:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:42.025 16:13:11 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:42.025 16:13:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:20:42.025 16:13:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:42.025 16:13:11 -- host/auth.sh@68 -- # digest=sha256 00:20:42.025 16:13:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:42.025 16:13:11 -- host/auth.sh@68 -- # keyid=2 00:20:42.025 16:13:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:42.025 16:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.025 16:13:11 -- common/autotest_common.sh@10 -- # set +x 00:20:42.025 16:13:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.025 16:13:11 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:20:42.025 16:13:11 -- nvmf/common.sh@717 -- # local ip 00:20:42.025 16:13:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:42.025 16:13:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:42.025 16:13:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.025 16:13:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.025 16:13:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:42.025 16:13:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.025 16:13:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:42.025 16:13:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:42.025 16:13:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:42.025 16:13:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:42.025 16:13:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.025 16:13:11 -- common/autotest_common.sh@10 -- # set +x 00:20:42.628 nvme0n1 00:20:42.628 16:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.628 16:13:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:42.628 16:13:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.628 16:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.628 16:13:12 -- common/autotest_common.sh@10 -- # set +x 00:20:42.628 16:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.628 16:13:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.628 16:13:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.628 16:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.628 16:13:12 -- common/autotest_common.sh@10 -- # set +x 00:20:42.628 16:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.628 16:13:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:42.628 16:13:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:42.628 16:13:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:42.628 16:13:12 -- host/auth.sh@44 -- # digest=sha256 00:20:42.628 16:13:12 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.628 16:13:12 -- host/auth.sh@44 -- # keyid=3 00:20:42.628 16:13:12 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:42.628 16:13:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:42.628 16:13:12 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:42.628 16:13:12 -- host/auth.sh@49 -- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:42.628 16:13:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:20:42.628 16:13:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:42.628 16:13:12 -- host/auth.sh@68 -- # digest=sha256 00:20:42.628 16:13:12 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:42.628 16:13:12 -- host/auth.sh@68 -- # keyid=3 00:20:42.628 16:13:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:42.628 16:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.628 16:13:12 -- common/autotest_common.sh@10 -- # set +x 00:20:42.628 16:13:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.628 16:13:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:42.628 16:13:12 -- nvmf/common.sh@717 -- # local ip 00:20:42.628 16:13:12 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:20:42.628 16:13:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:42.628 16:13:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.628 16:13:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.628 16:13:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:42.628 16:13:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.628 16:13:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:42.628 16:13:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:42.628 16:13:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:42.628 16:13:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:42.628 16:13:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.628 16:13:12 -- common/autotest_common.sh@10 -- # set +x 00:20:43.561 nvme0n1 00:20:43.561 16:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.561 16:13:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.561 16:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.561 16:13:13 -- common/autotest_common.sh@10 -- # set +x 00:20:43.561 16:13:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:43.562 16:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.562 16:13:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.562 16:13:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.562 16:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.562 16:13:13 -- common/autotest_common.sh@10 -- # set +x 00:20:43.562 16:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.562 16:13:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:43.562 16:13:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:43.562 16:13:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:43.562 16:13:13 -- host/auth.sh@44 -- # digest=sha256 00:20:43.562 16:13:13 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:43.562 16:13:13 -- host/auth.sh@44 -- # keyid=4 00:20:43.562 16:13:13 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:43.562 16:13:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:43.562 16:13:13 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:43.562 16:13:13 -- host/auth.sh@49 -- # echo DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:43.562 16:13:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:20:43.562 16:13:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:43.562 16:13:13 -- host/auth.sh@68 -- # digest=sha256 00:20:43.562 16:13:13 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:43.562 16:13:13 -- host/auth.sh@68 -- # keyid=4 00:20:43.562 16:13:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:43.562 16:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.562 16:13:13 -- common/autotest_common.sh@10 -- # set +x 00:20:43.562 16:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.562 16:13:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:43.562 16:13:13 -- nvmf/common.sh@717 -- # local ip 00:20:43.562 16:13:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:43.562 16:13:13 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:20:43.562 16:13:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.562 16:13:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.562 16:13:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:43.562 16:13:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.562 16:13:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:43.562 16:13:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:43.562 16:13:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:43.562 16:13:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:43.562 16:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.562 16:13:13 -- common/autotest_common.sh@10 -- # set +x 00:20:44.128 nvme0n1 00:20:44.128 16:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.128 16:13:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.128 16:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.128 16:13:13 -- common/autotest_common.sh@10 -- # set +x 00:20:44.128 16:13:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:44.128 16:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.128 16:13:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.128 16:13:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.128 16:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.128 16:13:13 -- common/autotest_common.sh@10 -- # set +x 00:20:44.128 16:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.128 16:13:13 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:44.128 16:13:13 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.128 16:13:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:44.128 16:13:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:44.128 16:13:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:44.128 16:13:13 -- host/auth.sh@44 -- # digest=sha384 00:20:44.128 16:13:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.128 16:13:13 -- host/auth.sh@44 -- # keyid=0 00:20:44.128 16:13:13 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:44.128 16:13:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:44.128 16:13:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:44.128 16:13:13 -- host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:44.128 16:13:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:20:44.128 16:13:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:44.128 16:13:13 -- host/auth.sh@68 -- # digest=sha384 00:20:44.128 16:13:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:44.128 16:13:13 -- host/auth.sh@68 -- # keyid=0 00:20:44.128 16:13:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.128 16:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.128 16:13:13 -- common/autotest_common.sh@10 -- # set +x 00:20:44.128 16:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.128 16:13:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:44.128 16:13:13 -- nvmf/common.sh@717 -- # local ip 00:20:44.128 16:13:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:44.128 16:13:13 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:20:44.128 16:13:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.128 16:13:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.128 16:13:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:44.128 16:13:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.128 16:13:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:44.128 16:13:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:44.128 16:13:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:44.128 16:13:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:44.128 16:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.128 16:13:13 -- common/autotest_common.sh@10 -- # set +x 00:20:44.128 nvme0n1 00:20:44.128 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.128 16:13:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.128 16:13:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:44.128 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.128 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.128 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.387 16:13:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.387 16:13:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.387 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.387 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.387 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.387 16:13:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:44.387 16:13:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:44.387 16:13:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:44.387 16:13:14 -- host/auth.sh@44 -- # digest=sha384 00:20:44.387 16:13:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.387 16:13:14 -- host/auth.sh@44 -- # keyid=1 00:20:44.387 16:13:14 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:44.387 16:13:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:44.387 16:13:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:44.387 16:13:14 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:44.387 16:13:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:20:44.387 16:13:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:44.387 16:13:14 -- host/auth.sh@68 -- # digest=sha384 00:20:44.387 16:13:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:44.387 16:13:14 -- host/auth.sh@68 -- # keyid=1 00:20:44.387 16:13:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.387 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.387 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.387 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.387 16:13:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:44.387 16:13:14 -- nvmf/common.sh@717 -- # local ip 00:20:44.387 16:13:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:44.387 16:13:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:44.387 16:13:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.387 
16:13:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.387 16:13:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:44.387 16:13:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.387 16:13:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:44.387 16:13:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:44.387 16:13:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:44.387 16:13:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:44.387 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.387 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.387 nvme0n1 00:20:44.387 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.387 16:13:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.387 16:13:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:44.387 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.387 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.387 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.387 16:13:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.387 16:13:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.387 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.387 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.387 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.387 16:13:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:44.387 16:13:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:44.387 16:13:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:44.387 16:13:14 -- host/auth.sh@44 -- # digest=sha384 00:20:44.387 16:13:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.387 16:13:14 -- host/auth.sh@44 -- # keyid=2 00:20:44.387 16:13:14 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:44.387 16:13:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:44.387 16:13:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:44.387 16:13:14 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:44.387 16:13:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:20:44.387 16:13:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:44.387 16:13:14 -- host/auth.sh@68 -- # digest=sha384 00:20:44.387 16:13:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:44.387 16:13:14 -- host/auth.sh@68 -- # keyid=2 00:20:44.387 16:13:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.387 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.387 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.387 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.387 16:13:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:44.387 16:13:14 -- nvmf/common.sh@717 -- # local ip 00:20:44.387 16:13:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:44.387 16:13:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:44.387 16:13:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.387 16:13:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.387 16:13:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:44.387 16:13:14 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.387 16:13:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:44.387 16:13:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:44.387 16:13:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:44.387 16:13:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:44.387 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.387 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.645 nvme0n1 00:20:44.645 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.645 16:13:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:44.645 16:13:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.645 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.645 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.645 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.645 16:13:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.645 16:13:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.645 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.645 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.645 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.645 16:13:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:44.645 16:13:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:44.645 16:13:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:44.645 16:13:14 -- host/auth.sh@44 -- # digest=sha384 00:20:44.645 16:13:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.645 16:13:14 -- host/auth.sh@44 -- # keyid=3 00:20:44.645 16:13:14 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:44.645 16:13:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:44.645 16:13:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:44.645 16:13:14 -- host/auth.sh@49 -- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:44.645 16:13:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:20:44.645 16:13:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:44.645 16:13:14 -- host/auth.sh@68 -- # digest=sha384 00:20:44.645 16:13:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:44.645 16:13:14 -- host/auth.sh@68 -- # keyid=3 00:20:44.645 16:13:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.645 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.645 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.645 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.646 16:13:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:44.646 16:13:14 -- nvmf/common.sh@717 -- # local ip 00:20:44.646 16:13:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:44.646 16:13:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:44.646 16:13:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.646 16:13:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.646 16:13:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:44.646 16:13:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.646 16:13:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
00:20:44.646 16:13:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:44.646 16:13:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:44.646 16:13:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:44.646 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.646 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.646 nvme0n1 00:20:44.646 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.646 16:13:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:44.646 16:13:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.646 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.646 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.646 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.905 16:13:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.905 16:13:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.905 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.905 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.905 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.905 16:13:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:44.905 16:13:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:44.905 16:13:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:44.905 16:13:14 -- host/auth.sh@44 -- # digest=sha384 00:20:44.905 16:13:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.905 16:13:14 -- host/auth.sh@44 -- # keyid=4 00:20:44.905 16:13:14 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:44.905 16:13:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:44.905 16:13:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:44.905 16:13:14 -- host/auth.sh@49 -- # echo DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:44.905 16:13:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:20:44.905 16:13:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:44.905 16:13:14 -- host/auth.sh@68 -- # digest=sha384 00:20:44.905 16:13:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:44.905 16:13:14 -- host/auth.sh@68 -- # keyid=4 00:20:44.905 16:13:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.905 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.905 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.905 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.905 16:13:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:44.905 16:13:14 -- nvmf/common.sh@717 -- # local ip 00:20:44.905 16:13:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:44.905 16:13:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:44.905 16:13:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.905 16:13:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.905 16:13:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:44.905 16:13:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.905 16:13:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:44.905 16:13:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:44.905 
16:13:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:44.905 16:13:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:44.905 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.905 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.905 nvme0n1 00:20:44.905 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.905 16:13:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:44.905 16:13:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.905 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.905 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.905 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.905 16:13:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.905 16:13:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.905 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.905 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.905 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.905 16:13:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.905 16:13:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:44.905 16:13:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:44.905 16:13:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:44.905 16:13:14 -- host/auth.sh@44 -- # digest=sha384 00:20:44.905 16:13:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.905 16:13:14 -- host/auth.sh@44 -- # keyid=0 00:20:44.905 16:13:14 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:44.905 16:13:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:44.905 16:13:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:44.905 16:13:14 -- host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:44.905 16:13:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:20:44.905 16:13:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:44.905 16:13:14 -- host/auth.sh@68 -- # digest=sha384 00:20:44.905 16:13:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:44.905 16:13:14 -- host/auth.sh@68 -- # keyid=0 00:20:44.905 16:13:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:44.905 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.905 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:44.905 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.905 16:13:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:44.905 16:13:14 -- nvmf/common.sh@717 -- # local ip 00:20:44.905 16:13:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:44.905 16:13:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:44.905 16:13:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.905 16:13:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.905 16:13:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:44.905 16:13:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.905 16:13:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:44.905 16:13:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:44.905 16:13:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:44.905 16:13:14 -- 
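The get_main_ns_ip expansion that keeps appearing in this section (the nvmf/common.sh@717-731 lines) is a small transport-to-address lookup. A minimal sketch, reconstructed from the traced lines only: the array contents, the tcp selection and the 10.0.0.1 result are taken from the log, while the function body, the TEST_TRANSPORT variable name and the indirect expansion are assumptions.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # This run uses tcp, hence the "[[ -z tcp ]]" and "[[ -z NVMF_INITIATOR_IP ]]" trace lines.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # shows up above as "[[ -z 10.0.0.1 ]]"
    echo "${!ip}"                          # 10.0.0.1, later passed to bdev_nvme_attach_controller -a
}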
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:44.905 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:44.905 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:45.164 nvme0n1 00:20:45.164 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.164 16:13:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:45.164 16:13:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.164 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.164 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:45.164 16:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.164 16:13:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.164 16:13:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.164 16:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.164 16:13:14 -- common/autotest_common.sh@10 -- # set +x 00:20:45.164 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.164 16:13:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:45.164 16:13:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:45.164 16:13:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:45.164 16:13:15 -- host/auth.sh@44 -- # digest=sha384 00:20:45.164 16:13:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.165 16:13:15 -- host/auth.sh@44 -- # keyid=1 00:20:45.165 16:13:15 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:45.165 16:13:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:45.165 16:13:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:45.165 16:13:15 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:45.165 16:13:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:20:45.165 16:13:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:45.165 16:13:15 -- host/auth.sh@68 -- # digest=sha384 00:20:45.165 16:13:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:45.165 16:13:15 -- host/auth.sh@68 -- # keyid=1 00:20:45.165 16:13:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.165 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.165 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.165 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.165 16:13:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:45.165 16:13:15 -- nvmf/common.sh@717 -- # local ip 00:20:45.165 16:13:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:45.165 16:13:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:45.165 16:13:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.165 16:13:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.165 16:13:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:45.165 16:13:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.165 16:13:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:45.165 16:13:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:45.165 16:13:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:45.165 16:13:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:45.165 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.165 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.423 nvme0n1 00:20:45.423 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.423 16:13:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:45.423 16:13:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.423 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.423 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.423 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.423 16:13:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.423 16:13:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.423 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.423 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.423 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.423 16:13:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:45.423 16:13:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:45.423 16:13:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:45.423 16:13:15 -- host/auth.sh@44 -- # digest=sha384 00:20:45.423 16:13:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.423 16:13:15 -- host/auth.sh@44 -- # keyid=2 00:20:45.423 16:13:15 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:45.423 16:13:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:45.423 16:13:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:45.423 16:13:15 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:45.423 16:13:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:20:45.423 16:13:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:45.423 16:13:15 -- host/auth.sh@68 -- # digest=sha384 00:20:45.423 16:13:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:45.423 16:13:15 -- host/auth.sh@68 -- # keyid=2 00:20:45.423 16:13:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.423 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.423 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.423 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.423 16:13:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:45.423 16:13:15 -- nvmf/common.sh@717 -- # local ip 00:20:45.423 16:13:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:45.423 16:13:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:45.423 16:13:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.423 16:13:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.423 16:13:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:45.423 16:13:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.423 16:13:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:45.423 16:13:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:45.423 16:13:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:45.423 16:13:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:45.423 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.423 
16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.423 nvme0n1 00:20:45.423 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.423 16:13:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.423 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.423 16:13:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:45.423 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.423 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.681 16:13:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.681 16:13:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.681 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.681 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.681 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.681 16:13:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:45.681 16:13:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:45.681 16:13:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:45.681 16:13:15 -- host/auth.sh@44 -- # digest=sha384 00:20:45.681 16:13:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.681 16:13:15 -- host/auth.sh@44 -- # keyid=3 00:20:45.681 16:13:15 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:45.681 16:13:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:45.681 16:13:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:45.681 16:13:15 -- host/auth.sh@49 -- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:45.681 16:13:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:20:45.681 16:13:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:45.681 16:13:15 -- host/auth.sh@68 -- # digest=sha384 00:20:45.681 16:13:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:45.681 16:13:15 -- host/auth.sh@68 -- # keyid=3 00:20:45.681 16:13:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.681 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.681 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.681 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.681 16:13:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:45.681 16:13:15 -- nvmf/common.sh@717 -- # local ip 00:20:45.681 16:13:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:45.681 16:13:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:45.681 16:13:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.681 16:13:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.681 16:13:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:45.681 16:13:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.681 16:13:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:45.681 16:13:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:45.681 16:13:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:45.681 16:13:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:45.681 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.681 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.681 nvme0n1 00:20:45.681 16:13:15 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.681 16:13:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.681 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.681 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.681 16:13:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:45.681 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.681 16:13:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.681 16:13:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.681 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.681 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.681 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.681 16:13:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:45.681 16:13:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:45.681 16:13:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:45.681 16:13:15 -- host/auth.sh@44 -- # digest=sha384 00:20:45.681 16:13:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.681 16:13:15 -- host/auth.sh@44 -- # keyid=4 00:20:45.681 16:13:15 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:45.681 16:13:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:45.681 16:13:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:45.681 16:13:15 -- host/auth.sh@49 -- # echo DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:45.681 16:13:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:20:45.681 16:13:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:45.681 16:13:15 -- host/auth.sh@68 -- # digest=sha384 00:20:45.681 16:13:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:45.681 16:13:15 -- host/auth.sh@68 -- # keyid=4 00:20:45.681 16:13:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.681 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.681 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.681 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.681 16:13:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:45.681 16:13:15 -- nvmf/common.sh@717 -- # local ip 00:20:45.681 16:13:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:45.681 16:13:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:45.681 16:13:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.681 16:13:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.681 16:13:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:45.681 16:13:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.681 16:13:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:45.682 16:13:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:45.682 16:13:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:45.682 16:13:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:45.682 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.682 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.940 nvme0n1 00:20:45.940 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.940 16:13:15 -- 
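The DHHC-1:NN:...: strings being loaded above are DH-HMAC-CHAP secrets in the textual representation NVMe uses (the same form nvme-cli's gen-dhchap-key produces): "DHHC-1:" marks the format, the two-digit field identifies the hash used to transform the secret (00 = unmodified, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the last field is base64 of the secret bytes followed by a CRC-32 of the secret. A quick, self-contained way to inspect one of the keys quoted in this log; the variable names below are chosen for illustration only.

# keyid=4 secret copied verbatim from the trace above
secret='DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=:'
IFS=: read -r fmt hash b64 _ <<< "$secret"
echo "format:         $fmt"     # DHHC-1
echo "transform hash: $hash"    # 03 -> SHA-512
echo "payload bytes:  $(echo "$b64" | base64 -d | wc -c)"   # secret plus 4-byte CRC-32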
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.940 16:13:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:45.940 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.940 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.940 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.940 16:13:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.940 16:13:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.940 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.940 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.940 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.940 16:13:15 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.940 16:13:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:45.940 16:13:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:45.940 16:13:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:45.940 16:13:15 -- host/auth.sh@44 -- # digest=sha384 00:20:45.940 16:13:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:45.940 16:13:15 -- host/auth.sh@44 -- # keyid=0 00:20:45.940 16:13:15 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:45.940 16:13:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:45.940 16:13:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:45.940 16:13:15 -- host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:45.940 16:13:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:20:45.940 16:13:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:45.940 16:13:15 -- host/auth.sh@68 -- # digest=sha384 00:20:45.940 16:13:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:45.940 16:13:15 -- host/auth.sh@68 -- # keyid=0 00:20:45.940 16:13:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:45.940 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.940 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:45.940 16:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:45.940 16:13:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:45.940 16:13:15 -- nvmf/common.sh@717 -- # local ip 00:20:45.940 16:13:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:45.940 16:13:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:45.940 16:13:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.940 16:13:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.940 16:13:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:45.940 16:13:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.940 16:13:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:45.940 16:13:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:45.940 16:13:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:45.940 16:13:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:45.940 16:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:45.940 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:20:46.198 nvme0n1 00:20:46.198 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.198 16:13:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.198 16:13:16 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:20:46.198 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.198 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.198 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.198 16:13:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.198 16:13:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.198 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.198 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.198 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.198 16:13:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:46.198 16:13:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:46.198 16:13:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:46.198 16:13:16 -- host/auth.sh@44 -- # digest=sha384 00:20:46.198 16:13:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.198 16:13:16 -- host/auth.sh@44 -- # keyid=1 00:20:46.198 16:13:16 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:46.198 16:13:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:46.198 16:13:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:46.198 16:13:16 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:46.198 16:13:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:20:46.198 16:13:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:46.198 16:13:16 -- host/auth.sh@68 -- # digest=sha384 00:20:46.198 16:13:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:46.198 16:13:16 -- host/auth.sh@68 -- # keyid=1 00:20:46.198 16:13:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.198 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.198 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.198 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.198 16:13:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:46.198 16:13:16 -- nvmf/common.sh@717 -- # local ip 00:20:46.198 16:13:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:46.198 16:13:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:46.198 16:13:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.198 16:13:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.198 16:13:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:46.198 16:13:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.198 16:13:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:46.198 16:13:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:46.198 16:13:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:46.198 16:13:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:46.198 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.198 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.457 nvme0n1 00:20:46.457 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.457 16:13:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.457 16:13:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:46.457 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 
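The nvmet_auth_set_key calls traced throughout (host/auth.sh@42-49) are the target-side half of every iteration: before the initiator connects, the digest, FFDHE group and secret for the allowed host are loaded into the kernel nvmet target. bash xtrace does not print redirections, so the configfs paths below are an assumption about where the three echoes land; the echoed values themselves are the ones visible in the log.

nvmet_auth_set_key() {
    local digest dhgroup keyid key
    digest="$1" dhgroup="$2" keyid="$3"
    key=${keys[$keyid]}      # one of the DHHC-1:... secrets prepared earlier in the test

    # Assumed location of the nvmet host entry; the trace only shows the echo arguments.
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host_dir/dhchap_hash"      # e.g. 'hmac(sha384)'
    echo "$dhgroup" > "$host_dir/dhchap_dhgroup"        # e.g. ffdhe4096
    echo "$key" > "$host_dir/dhchap_key"                # DHHC-1:... secret for this key id
}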
00:20:46.457 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.457 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.457 16:13:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.457 16:13:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.457 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.457 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.457 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.457 16:13:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:46.458 16:13:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:46.458 16:13:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:46.458 16:13:16 -- host/auth.sh@44 -- # digest=sha384 00:20:46.458 16:13:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.458 16:13:16 -- host/auth.sh@44 -- # keyid=2 00:20:46.458 16:13:16 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:46.458 16:13:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:46.458 16:13:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:46.458 16:13:16 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:46.458 16:13:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:20:46.458 16:13:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:46.458 16:13:16 -- host/auth.sh@68 -- # digest=sha384 00:20:46.458 16:13:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:46.458 16:13:16 -- host/auth.sh@68 -- # keyid=2 00:20:46.458 16:13:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.458 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.458 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.458 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.458 16:13:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:46.458 16:13:16 -- nvmf/common.sh@717 -- # local ip 00:20:46.458 16:13:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:46.458 16:13:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:46.458 16:13:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.458 16:13:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.458 16:13:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:46.458 16:13:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.458 16:13:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:46.458 16:13:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:46.458 16:13:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:46.458 16:13:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:46.458 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.458 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.717 nvme0n1 00:20:46.717 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.717 16:13:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.717 16:13:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:46.717 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.717 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.717 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.717 16:13:16 -- 
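The "[[ nvme0 == \n\v\m\e\0 ]]" checks that follow every attach in this section (host/auth.sh@73; one appears immediately below) are just xtrace's pattern-quoted rendering of a plain string comparison. Stripped of the quoting, the verification and cleanup step is the following; the intermediate variable name is added here for readability only.

# List the controllers the SPDK initiator currently knows about and confirm that
# the one attached with --dhchap-key really exists, i.e. the DH-HMAC-CHAP
# handshake succeeded for this digest/dhgroup/key combination.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]

# Tear the controller down again before the next combination is tried.
rpc_cmd bdev_nvme_detach_controller nvme0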
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.717 16:13:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.717 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.717 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.717 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.717 16:13:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:46.717 16:13:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:46.717 16:13:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:46.717 16:13:16 -- host/auth.sh@44 -- # digest=sha384 00:20:46.717 16:13:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.717 16:13:16 -- host/auth.sh@44 -- # keyid=3 00:20:46.717 16:13:16 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:46.717 16:13:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:46.717 16:13:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:46.717 16:13:16 -- host/auth.sh@49 -- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:46.717 16:13:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:20:46.717 16:13:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:46.717 16:13:16 -- host/auth.sh@68 -- # digest=sha384 00:20:46.717 16:13:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:46.717 16:13:16 -- host/auth.sh@68 -- # keyid=3 00:20:46.717 16:13:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.717 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.717 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.717 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.717 16:13:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:46.717 16:13:16 -- nvmf/common.sh@717 -- # local ip 00:20:46.717 16:13:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:46.717 16:13:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:46.717 16:13:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.717 16:13:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.717 16:13:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:46.717 16:13:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.717 16:13:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:46.717 16:13:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:46.717 16:13:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:46.717 16:13:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:46.717 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.717 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.975 nvme0n1 00:20:46.975 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.975 16:13:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:46.975 16:13:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.975 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.975 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.975 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.975 16:13:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.975 16:13:16 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:46.975 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.975 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.975 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.975 16:13:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:46.975 16:13:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:46.975 16:13:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:46.975 16:13:16 -- host/auth.sh@44 -- # digest=sha384 00:20:46.975 16:13:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.975 16:13:16 -- host/auth.sh@44 -- # keyid=4 00:20:46.975 16:13:16 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:46.975 16:13:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:46.975 16:13:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:46.975 16:13:16 -- host/auth.sh@49 -- # echo DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:46.975 16:13:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:20:46.975 16:13:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:46.975 16:13:16 -- host/auth.sh@68 -- # digest=sha384 00:20:46.975 16:13:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:46.975 16:13:16 -- host/auth.sh@68 -- # keyid=4 00:20:46.975 16:13:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.975 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.975 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.975 16:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.975 16:13:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:46.975 16:13:16 -- nvmf/common.sh@717 -- # local ip 00:20:46.975 16:13:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:46.975 16:13:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:46.975 16:13:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.975 16:13:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.975 16:13:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:46.975 16:13:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.975 16:13:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:46.975 16:13:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:46.975 16:13:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:46.975 16:13:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:46.975 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.975 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:47.234 nvme0n1 00:20:47.234 16:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.234 16:13:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:47.234 16:13:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.234 16:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.234 16:13:17 -- common/autotest_common.sh@10 -- # set +x 00:20:47.234 16:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.234 16:13:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.234 16:13:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.234 16:13:17 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.234 16:13:17 -- common/autotest_common.sh@10 -- # set +x 00:20:47.234 16:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.234 16:13:17 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.234 16:13:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:47.234 16:13:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:47.234 16:13:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:47.234 16:13:17 -- host/auth.sh@44 -- # digest=sha384 00:20:47.234 16:13:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.234 16:13:17 -- host/auth.sh@44 -- # keyid=0 00:20:47.234 16:13:17 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:47.234 16:13:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:47.234 16:13:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:47.234 16:13:17 -- host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:47.234 16:13:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:20:47.234 16:13:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:47.234 16:13:17 -- host/auth.sh@68 -- # digest=sha384 00:20:47.234 16:13:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:47.234 16:13:17 -- host/auth.sh@68 -- # keyid=0 00:20:47.234 16:13:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.234 16:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.234 16:13:17 -- common/autotest_common.sh@10 -- # set +x 00:20:47.492 16:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.492 16:13:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:47.492 16:13:17 -- nvmf/common.sh@717 -- # local ip 00:20:47.492 16:13:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:47.492 16:13:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:47.492 16:13:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.492 16:13:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.492 16:13:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:47.492 16:13:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.492 16:13:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:47.492 16:13:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:47.492 16:13:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:47.492 16:13:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:47.492 16:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.492 16:13:17 -- common/autotest_common.sh@10 -- # set +x 00:20:47.750 nvme0n1 00:20:47.750 16:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.750 16:13:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.750 16:13:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:47.750 16:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.750 16:13:17 -- common/autotest_common.sh@10 -- # set +x 00:20:47.750 16:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.750 16:13:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.750 16:13:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.750 16:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.750 16:13:17 -- 
common/autotest_common.sh@10 -- # set +x 00:20:47.750 16:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.750 16:13:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:47.750 16:13:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:47.750 16:13:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:47.750 16:13:17 -- host/auth.sh@44 -- # digest=sha384 00:20:47.750 16:13:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.750 16:13:17 -- host/auth.sh@44 -- # keyid=1 00:20:47.750 16:13:17 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:47.750 16:13:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:47.751 16:13:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:47.751 16:13:17 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:47.751 16:13:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:20:47.751 16:13:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:47.751 16:13:17 -- host/auth.sh@68 -- # digest=sha384 00:20:47.751 16:13:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:47.751 16:13:17 -- host/auth.sh@68 -- # keyid=1 00:20:47.751 16:13:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.751 16:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.751 16:13:17 -- common/autotest_common.sh@10 -- # set +x 00:20:47.751 16:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.751 16:13:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:47.751 16:13:17 -- nvmf/common.sh@717 -- # local ip 00:20:47.751 16:13:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:47.751 16:13:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:47.751 16:13:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.751 16:13:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.751 16:13:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:47.751 16:13:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.751 16:13:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:47.751 16:13:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:47.751 16:13:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:47.751 16:13:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:47.751 16:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.751 16:13:17 -- common/autotest_common.sh@10 -- # set +x 00:20:48.009 nvme0n1 00:20:48.009 16:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.268 16:13:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.269 16:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.269 16:13:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:48.269 16:13:17 -- common/autotest_common.sh@10 -- # set +x 00:20:48.269 16:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.269 16:13:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.269 16:13:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.269 16:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.269 16:13:18 -- common/autotest_common.sh@10 -- # set +x 00:20:48.269 16:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
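The traced commands above repeat one fixed sequence for every (digest, dhgroup, keyid) combination: program the key on the target, restrict the host to the matching digest and DH group, attach the controller with that key, confirm the controller came up, then detach. A condensed sketch of that loop, reconstructed only from the commands visible in this trace (the upstream host/auth.sh contains additional setup and error handling), is:

    # Sketch reconstructed from the traced commands; not the full upstream script.
    for digest in "${digests[@]}"; do                # e.g. sha384, sha512
      for dhgroup in "${dhgroups[@]}"; do            # e.g. ffdhe4096, ffdhe6144, ffdhe8192
        for keyid in "${!keys[@]}"; do               # key indexes 0..4 (DHHC-1 secrets)
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side: install the key under test
          rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}"               # attach only succeeds if DH-HMAC-CHAP completes
          [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
          rpc_cmd bdev_nvme_detach_controller nvme0  # tear down before the next combination
        done
      done
    done

The 10.0.0.1 address used for -a is the value resolved by get_main_ns_ip in the trace (NVMF_INITIATOR_IP for the tcp transport).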
00:20:48.269 16:13:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:48.269 16:13:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:48.269 16:13:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:48.269 16:13:18 -- host/auth.sh@44 -- # digest=sha384 00:20:48.269 16:13:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:48.269 16:13:18 -- host/auth.sh@44 -- # keyid=2 00:20:48.269 16:13:18 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:48.269 16:13:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:48.269 16:13:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:48.269 16:13:18 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:48.269 16:13:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:20:48.269 16:13:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:48.269 16:13:18 -- host/auth.sh@68 -- # digest=sha384 00:20:48.269 16:13:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:48.269 16:13:18 -- host/auth.sh@68 -- # keyid=2 00:20:48.269 16:13:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.269 16:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.269 16:13:18 -- common/autotest_common.sh@10 -- # set +x 00:20:48.269 16:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.269 16:13:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:48.269 16:13:18 -- nvmf/common.sh@717 -- # local ip 00:20:48.269 16:13:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:48.269 16:13:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:48.269 16:13:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.269 16:13:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.269 16:13:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:48.269 16:13:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.269 16:13:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:48.269 16:13:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:48.269 16:13:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:48.269 16:13:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:48.269 16:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.269 16:13:18 -- common/autotest_common.sh@10 -- # set +x 00:20:48.527 nvme0n1 00:20:48.527 16:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.527 16:13:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.527 16:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.527 16:13:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:48.527 16:13:18 -- common/autotest_common.sh@10 -- # set +x 00:20:48.527 16:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.527 16:13:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.527 16:13:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.527 16:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.527 16:13:18 -- common/autotest_common.sh@10 -- # set +x 00:20:48.527 16:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.527 16:13:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:48.527 16:13:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
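The host/auth.sh@42-@49 entries that follow are the body of the nvmet_auth_set_key helper just invoked: it echoes the HMAC name, the DH group, and the DHHC-1 secret selected by keyid. The redirection targets are not captured by xtrace; assuming they are the kernel nvmet configfs host attributes (which accept exactly these value formats), the helper presumably amounts to:

    # Hedged sketch: only the echoed values appear in the trace; the configfs
    # paths below are an assumption about where those values are written.
    nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[$keyid]}                          # DHHC-1:0X:... secret for this key index
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. hmac(sha384)
      echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # e.g. ffdhe6144
      echo "${key}"          > "${host}/dhchap_key"      # secret the target will verify against
    }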
00:20:48.527 16:13:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:48.527 16:13:18 -- host/auth.sh@44 -- # digest=sha384 00:20:48.527 16:13:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:48.527 16:13:18 -- host/auth.sh@44 -- # keyid=3 00:20:48.527 16:13:18 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:48.527 16:13:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:48.527 16:13:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:48.527 16:13:18 -- host/auth.sh@49 -- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:48.527 16:13:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:20:48.527 16:13:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:48.527 16:13:18 -- host/auth.sh@68 -- # digest=sha384 00:20:48.527 16:13:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:48.527 16:13:18 -- host/auth.sh@68 -- # keyid=3 00:20:48.527 16:13:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.527 16:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.527 16:13:18 -- common/autotest_common.sh@10 -- # set +x 00:20:48.527 16:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:48.527 16:13:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:48.527 16:13:18 -- nvmf/common.sh@717 -- # local ip 00:20:48.527 16:13:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:48.527 16:13:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:48.527 16:13:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.527 16:13:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.527 16:13:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:48.527 16:13:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.527 16:13:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:48.527 16:13:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:48.527 16:13:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:48.527 16:13:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:48.527 16:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:48.527 16:13:18 -- common/autotest_common.sh@10 -- # set +x 00:20:49.093 nvme0n1 00:20:49.093 16:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.093 16:13:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:49.093 16:13:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.093 16:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.093 16:13:18 -- common/autotest_common.sh@10 -- # set +x 00:20:49.093 16:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.093 16:13:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.093 16:13:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.093 16:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.093 16:13:18 -- common/autotest_common.sh@10 -- # set +x 00:20:49.093 16:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.093 16:13:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:49.093 16:13:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:49.093 16:13:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:49.093 16:13:18 -- host/auth.sh@44 -- 
# digest=sha384 00:20:49.093 16:13:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:49.093 16:13:18 -- host/auth.sh@44 -- # keyid=4 00:20:49.093 16:13:18 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:49.093 16:13:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:49.093 16:13:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:49.093 16:13:18 -- host/auth.sh@49 -- # echo DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:49.093 16:13:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:20:49.093 16:13:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:49.093 16:13:18 -- host/auth.sh@68 -- # digest=sha384 00:20:49.093 16:13:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:49.093 16:13:18 -- host/auth.sh@68 -- # keyid=4 00:20:49.093 16:13:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.093 16:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.093 16:13:18 -- common/autotest_common.sh@10 -- # set +x 00:20:49.093 16:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.093 16:13:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:49.093 16:13:18 -- nvmf/common.sh@717 -- # local ip 00:20:49.093 16:13:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:49.093 16:13:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:49.093 16:13:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.093 16:13:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.093 16:13:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:49.093 16:13:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.093 16:13:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:49.093 16:13:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:49.093 16:13:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:49.093 16:13:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:49.093 16:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.093 16:13:18 -- common/autotest_common.sh@10 -- # set +x 00:20:49.351 nvme0n1 00:20:49.352 16:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.352 16:13:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.352 16:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.352 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:20:49.352 16:13:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:49.352 16:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.352 16:13:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.352 16:13:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.352 16:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.352 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:20:49.352 16:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.352 16:13:19 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.352 16:13:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:49.352 16:13:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:49.352 16:13:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:49.352 16:13:19 -- host/auth.sh@44 -- # 
digest=sha384 00:20:49.352 16:13:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:49.352 16:13:19 -- host/auth.sh@44 -- # keyid=0 00:20:49.352 16:13:19 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:49.352 16:13:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:49.352 16:13:19 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:49.352 16:13:19 -- host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:49.352 16:13:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:20:49.352 16:13:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:49.352 16:13:19 -- host/auth.sh@68 -- # digest=sha384 00:20:49.352 16:13:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:49.352 16:13:19 -- host/auth.sh@68 -- # keyid=0 00:20:49.352 16:13:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.352 16:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.352 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:20:49.609 16:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.609 16:13:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:49.609 16:13:19 -- nvmf/common.sh@717 -- # local ip 00:20:49.609 16:13:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:49.609 16:13:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:49.609 16:13:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.609 16:13:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.609 16:13:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:49.609 16:13:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.609 16:13:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:49.609 16:13:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:49.609 16:13:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:49.609 16:13:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:49.609 16:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.609 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:20:50.175 nvme0n1 00:20:50.175 16:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.175 16:13:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.175 16:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.175 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:20:50.175 16:13:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:50.175 16:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.175 16:13:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.175 16:13:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.175 16:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.175 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:20:50.175 16:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.175 16:13:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:50.175 16:13:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:50.175 16:13:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:50.175 16:13:19 -- host/auth.sh@44 -- # digest=sha384 00:20:50.175 16:13:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:50.175 16:13:19 -- host/auth.sh@44 -- # keyid=1 00:20:50.175 16:13:19 -- 
host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:50.175 16:13:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:50.175 16:13:19 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:50.175 16:13:19 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:50.175 16:13:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:20:50.175 16:13:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:50.175 16:13:19 -- host/auth.sh@68 -- # digest=sha384 00:20:50.175 16:13:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:50.175 16:13:19 -- host/auth.sh@68 -- # keyid=1 00:20:50.175 16:13:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.175 16:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.175 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:20:50.175 16:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.175 16:13:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:50.175 16:13:20 -- nvmf/common.sh@717 -- # local ip 00:20:50.175 16:13:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:50.175 16:13:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:50.175 16:13:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.175 16:13:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.175 16:13:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:50.175 16:13:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.175 16:13:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:50.175 16:13:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:50.175 16:13:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:50.175 16:13:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:50.175 16:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.175 16:13:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.740 nvme0n1 00:20:50.740 16:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.740 16:13:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.740 16:13:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:50.740 16:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.740 16:13:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.740 16:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.740 16:13:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.740 16:13:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.740 16:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.740 16:13:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.740 16:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.740 16:13:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:50.740 16:13:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:50.740 16:13:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:50.740 16:13:20 -- host/auth.sh@44 -- # digest=sha384 00:20:50.740 16:13:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:50.740 16:13:20 -- host/auth.sh@44 -- # keyid=2 00:20:50.740 16:13:20 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:50.740 16:13:20 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:50.740 16:13:20 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:50.740 16:13:20 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:50.740 16:13:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:20:50.740 16:13:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:50.740 16:13:20 -- host/auth.sh@68 -- # digest=sha384 00:20:50.740 16:13:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:50.740 16:13:20 -- host/auth.sh@68 -- # keyid=2 00:20:50.740 16:13:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.740 16:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.740 16:13:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.740 16:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.740 16:13:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:50.740 16:13:20 -- nvmf/common.sh@717 -- # local ip 00:20:50.740 16:13:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:50.740 16:13:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:50.740 16:13:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.740 16:13:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.740 16:13:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:50.740 16:13:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.740 16:13:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:50.740 16:13:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:50.740 16:13:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:50.740 16:13:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:50.740 16:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.740 16:13:20 -- common/autotest_common.sh@10 -- # set +x 00:20:51.674 nvme0n1 00:20:51.674 16:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.674 16:13:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:51.674 16:13:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.674 16:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.674 16:13:21 -- common/autotest_common.sh@10 -- # set +x 00:20:51.674 16:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.674 16:13:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.674 16:13:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.674 16:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.674 16:13:21 -- common/autotest_common.sh@10 -- # set +x 00:20:51.674 16:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.674 16:13:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:51.674 16:13:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:51.674 16:13:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:51.674 16:13:21 -- host/auth.sh@44 -- # digest=sha384 00:20:51.674 16:13:21 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:51.674 16:13:21 -- host/auth.sh@44 -- # keyid=3 00:20:51.674 16:13:21 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:51.674 16:13:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:51.674 16:13:21 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:51.674 16:13:21 -- host/auth.sh@49 
-- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:51.674 16:13:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:20:51.674 16:13:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:51.674 16:13:21 -- host/auth.sh@68 -- # digest=sha384 00:20:51.674 16:13:21 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:51.674 16:13:21 -- host/auth.sh@68 -- # keyid=3 00:20:51.674 16:13:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.674 16:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.674 16:13:21 -- common/autotest_common.sh@10 -- # set +x 00:20:51.674 16:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:51.674 16:13:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:51.674 16:13:21 -- nvmf/common.sh@717 -- # local ip 00:20:51.674 16:13:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:51.674 16:13:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:51.674 16:13:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.674 16:13:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.674 16:13:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:51.674 16:13:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.674 16:13:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:51.674 16:13:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:51.674 16:13:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:51.674 16:13:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:51.674 16:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:51.674 16:13:21 -- common/autotest_common.sh@10 -- # set +x 00:20:52.240 nvme0n1 00:20:52.240 16:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.240 16:13:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.240 16:13:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:52.240 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.240 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:20:52.240 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.240 16:13:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.240 16:13:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.240 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.240 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:20:52.240 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.240 16:13:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:52.240 16:13:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:52.240 16:13:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:52.240 16:13:22 -- host/auth.sh@44 -- # digest=sha384 00:20:52.240 16:13:22 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:52.240 16:13:22 -- host/auth.sh@44 -- # keyid=4 00:20:52.240 16:13:22 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:52.240 16:13:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:52.240 16:13:22 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:52.240 16:13:22 -- host/auth.sh@49 -- # echo 
DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:52.240 16:13:22 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:20:52.240 16:13:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:52.240 16:13:22 -- host/auth.sh@68 -- # digest=sha384 00:20:52.240 16:13:22 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:52.240 16:13:22 -- host/auth.sh@68 -- # keyid=4 00:20:52.240 16:13:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.240 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.240 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:20:52.240 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.240 16:13:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:52.240 16:13:22 -- nvmf/common.sh@717 -- # local ip 00:20:52.240 16:13:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:52.240 16:13:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:52.240 16:13:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.240 16:13:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.240 16:13:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:52.240 16:13:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.240 16:13:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:52.240 16:13:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:52.240 16:13:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:52.240 16:13:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:52.240 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.240 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:20:52.844 nvme0n1 00:20:52.844 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.844 16:13:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.844 16:13:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:52.844 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.844 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:20:52.844 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.844 16:13:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.844 16:13:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.844 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.844 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:20:52.844 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.844 16:13:22 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:52.844 16:13:22 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.844 16:13:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:52.844 16:13:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:52.844 16:13:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:52.844 16:13:22 -- host/auth.sh@44 -- # digest=sha512 00:20:52.844 16:13:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:52.844 16:13:22 -- host/auth.sh@44 -- # keyid=0 00:20:52.844 16:13:22 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:52.844 16:13:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:52.844 16:13:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:52.844 
16:13:22 -- host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:52.844 16:13:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:20:52.844 16:13:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:52.844 16:13:22 -- host/auth.sh@68 -- # digest=sha512 00:20:52.844 16:13:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:52.844 16:13:22 -- host/auth.sh@68 -- # keyid=0 00:20:52.844 16:13:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:52.844 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.844 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:20:52.844 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.844 16:13:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:52.844 16:13:22 -- nvmf/common.sh@717 -- # local ip 00:20:52.844 16:13:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:52.844 16:13:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:52.844 16:13:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.844 16:13:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.844 16:13:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:52.844 16:13:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.844 16:13:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:52.844 16:13:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:52.844 16:13:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:52.844 16:13:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:52.844 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.844 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:20:53.128 nvme0n1 00:20:53.128 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.128 16:13:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.128 16:13:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:53.128 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.128 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:20:53.128 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.128 16:13:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.128 16:13:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.128 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.128 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:20:53.128 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.128 16:13:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:53.128 16:13:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:53.128 16:13:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:53.128 16:13:22 -- host/auth.sh@44 -- # digest=sha512 00:20:53.128 16:13:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:53.128 16:13:22 -- host/auth.sh@44 -- # keyid=1 00:20:53.128 16:13:22 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:53.128 16:13:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:53.128 16:13:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:53.129 16:13:22 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:53.129 16:13:22 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:20:53.129 16:13:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:53.129 16:13:22 -- host/auth.sh@68 -- # digest=sha512 00:20:53.129 16:13:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:53.129 16:13:22 -- host/auth.sh@68 -- # keyid=1 00:20:53.129 16:13:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:53.129 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.129 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:20:53.129 16:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.129 16:13:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:53.129 16:13:22 -- nvmf/common.sh@717 -- # local ip 00:20:53.129 16:13:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:53.129 16:13:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:53.129 16:13:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.129 16:13:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.129 16:13:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:53.129 16:13:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.129 16:13:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:53.129 16:13:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:53.129 16:13:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:53.129 16:13:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:53.129 16:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.129 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:20:53.129 nvme0n1 00:20:53.129 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.129 16:13:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.129 16:13:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:53.129 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.129 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.129 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.129 16:13:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.129 16:13:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.129 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.129 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.129 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.129 16:13:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:53.129 16:13:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:53.129 16:13:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:53.129 16:13:23 -- host/auth.sh@44 -- # digest=sha512 00:20:53.129 16:13:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:53.129 16:13:23 -- host/auth.sh@44 -- # keyid=2 00:20:53.129 16:13:23 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:53.129 16:13:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:53.129 16:13:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:53.129 16:13:23 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:53.129 16:13:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:20:53.129 16:13:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:53.129 16:13:23 -- 
host/auth.sh@68 -- # digest=sha512 00:20:53.129 16:13:23 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:53.129 16:13:23 -- host/auth.sh@68 -- # keyid=2 00:20:53.129 16:13:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:53.129 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.129 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.129 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.129 16:13:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:53.129 16:13:23 -- nvmf/common.sh@717 -- # local ip 00:20:53.129 16:13:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:53.129 16:13:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:53.129 16:13:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.129 16:13:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.129 16:13:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:53.129 16:13:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.129 16:13:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:53.129 16:13:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:53.129 16:13:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:53.129 16:13:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:53.129 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.129 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.386 nvme0n1 00:20:53.386 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.386 16:13:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.386 16:13:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:53.386 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.386 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.386 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.386 16:13:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.386 16:13:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.387 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.387 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.387 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.387 16:13:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:53.387 16:13:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:53.387 16:13:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:53.387 16:13:23 -- host/auth.sh@44 -- # digest=sha512 00:20:53.387 16:13:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:53.387 16:13:23 -- host/auth.sh@44 -- # keyid=3 00:20:53.387 16:13:23 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:53.387 16:13:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:53.387 16:13:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:53.387 16:13:23 -- host/auth.sh@49 -- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:53.387 16:13:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:20:53.387 16:13:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:53.387 16:13:23 -- host/auth.sh@68 -- # digest=sha512 00:20:53.387 16:13:23 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:53.387 16:13:23 
-- host/auth.sh@68 -- # keyid=3 00:20:53.387 16:13:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:53.387 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.387 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.387 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.387 16:13:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:53.387 16:13:23 -- nvmf/common.sh@717 -- # local ip 00:20:53.387 16:13:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:53.387 16:13:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:53.387 16:13:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.387 16:13:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.387 16:13:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:53.387 16:13:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.387 16:13:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:53.387 16:13:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:53.387 16:13:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:53.387 16:13:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:53.387 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.387 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.645 nvme0n1 00:20:53.645 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.645 16:13:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.645 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.645 16:13:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:53.645 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.645 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.645 16:13:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.645 16:13:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.645 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.645 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.645 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.645 16:13:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:53.645 16:13:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:53.645 16:13:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:53.645 16:13:23 -- host/auth.sh@44 -- # digest=sha512 00:20:53.645 16:13:23 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:53.645 16:13:23 -- host/auth.sh@44 -- # keyid=4 00:20:53.645 16:13:23 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:53.645 16:13:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:53.645 16:13:23 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:53.645 16:13:23 -- host/auth.sh@49 -- # echo DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:53.645 16:13:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:20:53.645 16:13:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:53.645 16:13:23 -- host/auth.sh@68 -- # digest=sha512 00:20:53.645 16:13:23 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:53.645 16:13:23 -- host/auth.sh@68 -- # keyid=4 00:20:53.645 16:13:23 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:53.645 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.645 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.645 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.645 16:13:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:53.645 16:13:23 -- nvmf/common.sh@717 -- # local ip 00:20:53.645 16:13:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:53.645 16:13:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:53.645 16:13:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.645 16:13:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.645 16:13:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:53.645 16:13:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.645 16:13:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:53.645 16:13:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:53.645 16:13:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:53.645 16:13:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:53.645 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.645 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.645 nvme0n1 00:20:53.646 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.646 16:13:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.646 16:13:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:53.646 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.646 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.646 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.646 16:13:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.646 16:13:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.646 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.646 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.646 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.646 16:13:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.646 16:13:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:53.646 16:13:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:53.646 16:13:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:53.646 16:13:23 -- host/auth.sh@44 -- # digest=sha512 00:20:53.646 16:13:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:53.646 16:13:23 -- host/auth.sh@44 -- # keyid=0 00:20:53.646 16:13:23 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:53.646 16:13:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:53.646 16:13:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:53.646 16:13:23 -- host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:53.646 16:13:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:20:53.646 16:13:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:53.646 16:13:23 -- host/auth.sh@68 -- # digest=sha512 00:20:53.646 16:13:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:53.646 16:13:23 -- host/auth.sh@68 -- # keyid=0 00:20:53.646 16:13:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
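The rpc_cmd entries are the test harness's thin wrapper around SPDK's JSON-RPC client, so each sha512 iteration above can also be reproduced by hand with scripts/rpc.py. A sketch of the manual equivalent for the ffdhe3072/key0 case (it assumes the key name key0 refers to a DH-HMAC-CHAP secret registered earlier in the test, outside this excerpt):

    # Manual equivalent of one traced iteration; rpc_cmd forwards its arguments
    # to scripts/rpc.py against the test application's RPC socket.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0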
00:20:53.646 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.646 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.646 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.646 16:13:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:53.646 16:13:23 -- nvmf/common.sh@717 -- # local ip 00:20:53.646 16:13:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:53.646 16:13:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:53.646 16:13:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.646 16:13:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.646 16:13:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:53.646 16:13:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.905 16:13:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:53.905 16:13:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:53.905 16:13:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:53.905 16:13:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:53.905 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.905 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.905 nvme0n1 00:20:53.905 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.905 16:13:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.905 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.905 16:13:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:53.905 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.905 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.905 16:13:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.905 16:13:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.905 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.905 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.905 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.905 16:13:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:53.905 16:13:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:53.905 16:13:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:53.905 16:13:23 -- host/auth.sh@44 -- # digest=sha512 00:20:53.905 16:13:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:53.905 16:13:23 -- host/auth.sh@44 -- # keyid=1 00:20:53.905 16:13:23 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:53.905 16:13:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:53.905 16:13:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:53.905 16:13:23 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:53.905 16:13:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:20:53.905 16:13:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:53.905 16:13:23 -- host/auth.sh@68 -- # digest=sha512 00:20:53.905 16:13:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:53.905 16:13:23 -- host/auth.sh@68 -- # keyid=1 00:20:53.905 16:13:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.905 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.905 16:13:23 -- 
common/autotest_common.sh@10 -- # set +x 00:20:53.905 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:53.905 16:13:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:53.905 16:13:23 -- nvmf/common.sh@717 -- # local ip 00:20:53.905 16:13:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:53.905 16:13:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:53.905 16:13:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.905 16:13:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.905 16:13:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:53.905 16:13:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.905 16:13:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:53.905 16:13:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:53.905 16:13:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:53.905 16:13:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:53.905 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:53.905 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:54.164 nvme0n1 00:20:54.164 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.164 16:13:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:54.164 16:13:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.164 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.164 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:54.164 16:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.164 16:13:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.164 16:13:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.164 16:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.164 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:20:54.164 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.164 16:13:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:54.164 16:13:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:54.164 16:13:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:54.164 16:13:24 -- host/auth.sh@44 -- # digest=sha512 00:20:54.164 16:13:24 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:54.164 16:13:24 -- host/auth.sh@44 -- # keyid=2 00:20:54.164 16:13:24 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:54.164 16:13:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:54.164 16:13:24 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:54.164 16:13:24 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:54.164 16:13:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:20:54.164 16:13:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:54.164 16:13:24 -- host/auth.sh@68 -- # digest=sha512 00:20:54.164 16:13:24 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:54.164 16:13:24 -- host/auth.sh@68 -- # keyid=2 00:20:54.164 16:13:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:54.164 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.164 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.164 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.164 16:13:24 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:20:54.164 16:13:24 -- nvmf/common.sh@717 -- # local ip 00:20:54.164 16:13:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:54.164 16:13:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:54.164 16:13:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.164 16:13:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.164 16:13:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:54.164 16:13:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.164 16:13:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:54.164 16:13:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:54.164 16:13:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:54.164 16:13:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:54.164 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.164 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.423 nvme0n1 00:20:54.423 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.423 16:13:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.423 16:13:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:54.423 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.423 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.423 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.423 16:13:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.423 16:13:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.423 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.423 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.423 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.423 16:13:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:54.423 16:13:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:54.423 16:13:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:54.423 16:13:24 -- host/auth.sh@44 -- # digest=sha512 00:20:54.423 16:13:24 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:54.423 16:13:24 -- host/auth.sh@44 -- # keyid=3 00:20:54.423 16:13:24 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:54.423 16:13:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:54.423 16:13:24 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:54.423 16:13:24 -- host/auth.sh@49 -- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:54.423 16:13:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:20:54.423 16:13:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:54.423 16:13:24 -- host/auth.sh@68 -- # digest=sha512 00:20:54.423 16:13:24 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:54.423 16:13:24 -- host/auth.sh@68 -- # keyid=3 00:20:54.423 16:13:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:54.423 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.423 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.423 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.423 16:13:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:54.423 16:13:24 -- nvmf/common.sh@717 -- # local ip 00:20:54.423 16:13:24 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:20:54.423 16:13:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:54.423 16:13:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.423 16:13:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.423 16:13:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:54.423 16:13:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.423 16:13:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:54.423 16:13:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:54.423 16:13:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:54.423 16:13:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:54.423 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.423 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.423 nvme0n1 00:20:54.423 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.423 16:13:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.423 16:13:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:54.423 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.423 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.423 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.680 16:13:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.680 16:13:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.680 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.680 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.680 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.680 16:13:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:54.680 16:13:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:54.680 16:13:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:54.680 16:13:24 -- host/auth.sh@44 -- # digest=sha512 00:20:54.680 16:13:24 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:54.680 16:13:24 -- host/auth.sh@44 -- # keyid=4 00:20:54.680 16:13:24 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:54.680 16:13:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:54.680 16:13:24 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:54.680 16:13:24 -- host/auth.sh@49 -- # echo DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:54.680 16:13:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:20:54.680 16:13:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:54.680 16:13:24 -- host/auth.sh@68 -- # digest=sha512 00:20:54.680 16:13:24 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:54.680 16:13:24 -- host/auth.sh@68 -- # keyid=4 00:20:54.680 16:13:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:54.680 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.680 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.680 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.680 16:13:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:54.680 16:13:24 -- nvmf/common.sh@717 -- # local ip 00:20:54.680 16:13:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:54.680 16:13:24 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:20:54.680 16:13:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.680 16:13:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.680 16:13:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:54.680 16:13:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.680 16:13:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:54.680 16:13:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:54.680 16:13:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:54.680 16:13:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:54.680 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.680 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.680 nvme0n1 00:20:54.680 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.680 16:13:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.680 16:13:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:54.680 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.680 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.680 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.680 16:13:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.680 16:13:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.680 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.680 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.680 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.680 16:13:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.680 16:13:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:54.680 16:13:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:54.680 16:13:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:54.680 16:13:24 -- host/auth.sh@44 -- # digest=sha512 00:20:54.680 16:13:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:54.680 16:13:24 -- host/auth.sh@44 -- # keyid=0 00:20:54.680 16:13:24 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:54.680 16:13:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:54.680 16:13:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:54.680 16:13:24 -- host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:54.680 16:13:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:20:54.680 16:13:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:54.680 16:13:24 -- host/auth.sh@68 -- # digest=sha512 00:20:54.680 16:13:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:54.680 16:13:24 -- host/auth.sh@68 -- # keyid=0 00:20:54.681 16:13:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:54.681 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.681 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.936 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.936 16:13:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:54.936 16:13:24 -- nvmf/common.sh@717 -- # local ip 00:20:54.936 16:13:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:54.936 16:13:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:54.936 16:13:24 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.936 16:13:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.936 16:13:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:54.936 16:13:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.936 16:13:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:54.936 16:13:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:54.936 16:13:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:54.936 16:13:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:54.936 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.936 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.936 nvme0n1 00:20:54.936 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.936 16:13:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.936 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.936 16:13:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:54.936 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.936 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.936 16:13:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.936 16:13:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.936 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.936 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:55.192 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.192 16:13:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:55.192 16:13:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:55.192 16:13:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:55.192 16:13:24 -- host/auth.sh@44 -- # digest=sha512 00:20:55.192 16:13:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:55.192 16:13:24 -- host/auth.sh@44 -- # keyid=1 00:20:55.192 16:13:24 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:55.192 16:13:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:55.192 16:13:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:55.192 16:13:24 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:55.192 16:13:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:20:55.192 16:13:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:55.192 16:13:24 -- host/auth.sh@68 -- # digest=sha512 00:20:55.192 16:13:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:55.192 16:13:24 -- host/auth.sh@68 -- # keyid=1 00:20:55.192 16:13:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:55.192 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.192 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:55.192 16:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.192 16:13:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:55.192 16:13:24 -- nvmf/common.sh@717 -- # local ip 00:20:55.192 16:13:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:55.192 16:13:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:55.192 16:13:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.192 16:13:24 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.192 16:13:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:55.192 16:13:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.192 16:13:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:55.192 16:13:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:55.192 16:13:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:55.192 16:13:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:55.192 16:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.192 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:20:55.192 nvme0n1 00:20:55.192 16:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.192 16:13:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.192 16:13:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:55.192 16:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.192 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.192 16:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.449 16:13:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.449 16:13:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.449 16:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.449 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.449 16:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.449 16:13:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:55.449 16:13:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:55.449 16:13:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:55.449 16:13:25 -- host/auth.sh@44 -- # digest=sha512 00:20:55.449 16:13:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:55.449 16:13:25 -- host/auth.sh@44 -- # keyid=2 00:20:55.449 16:13:25 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:55.449 16:13:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:55.449 16:13:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:55.449 16:13:25 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:55.449 16:13:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:20:55.449 16:13:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:55.449 16:13:25 -- host/auth.sh@68 -- # digest=sha512 00:20:55.449 16:13:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:55.449 16:13:25 -- host/auth.sh@68 -- # keyid=2 00:20:55.449 16:13:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:55.449 16:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.449 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.449 16:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.449 16:13:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:55.449 16:13:25 -- nvmf/common.sh@717 -- # local ip 00:20:55.449 16:13:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:55.449 16:13:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:55.449 16:13:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.449 16:13:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.449 16:13:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:55.449 16:13:25 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:20:55.449 16:13:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:55.449 16:13:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:55.449 16:13:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:55.449 16:13:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:55.449 16:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.449 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.449 nvme0n1 00:20:55.449 16:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.449 16:13:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.449 16:13:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:55.449 16:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.449 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.449 16:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.707 16:13:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.707 16:13:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.707 16:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.707 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.707 16:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.707 16:13:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:55.707 16:13:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:55.707 16:13:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:55.707 16:13:25 -- host/auth.sh@44 -- # digest=sha512 00:20:55.707 16:13:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:55.707 16:13:25 -- host/auth.sh@44 -- # keyid=3 00:20:55.707 16:13:25 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:55.707 16:13:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:55.707 16:13:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:55.707 16:13:25 -- host/auth.sh@49 -- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:55.707 16:13:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:20:55.707 16:13:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:55.707 16:13:25 -- host/auth.sh@68 -- # digest=sha512 00:20:55.707 16:13:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:55.707 16:13:25 -- host/auth.sh@68 -- # keyid=3 00:20:55.707 16:13:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:55.707 16:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.707 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.707 16:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.707 16:13:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:55.707 16:13:25 -- nvmf/common.sh@717 -- # local ip 00:20:55.707 16:13:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:55.707 16:13:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:55.707 16:13:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.707 16:13:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.707 16:13:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:55.707 16:13:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.707 16:13:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:55.707 16:13:25 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:55.707 16:13:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:55.707 16:13:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:55.707 16:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.707 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.707 nvme0n1 00:20:55.707 16:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.707 16:13:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.707 16:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.707 16:13:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:55.707 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.707 16:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.966 16:13:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.966 16:13:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.966 16:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.966 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.966 16:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.966 16:13:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:55.966 16:13:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:55.966 16:13:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:55.966 16:13:25 -- host/auth.sh@44 -- # digest=sha512 00:20:55.966 16:13:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:55.966 16:13:25 -- host/auth.sh@44 -- # keyid=4 00:20:55.966 16:13:25 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:55.966 16:13:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:55.966 16:13:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:55.966 16:13:25 -- host/auth.sh@49 -- # echo DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:55.966 16:13:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:20:55.966 16:13:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:55.966 16:13:25 -- host/auth.sh@68 -- # digest=sha512 00:20:55.966 16:13:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:55.966 16:13:25 -- host/auth.sh@68 -- # keyid=4 00:20:55.966 16:13:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:55.966 16:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.966 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.966 16:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.966 16:13:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:55.966 16:13:25 -- nvmf/common.sh@717 -- # local ip 00:20:55.966 16:13:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:55.966 16:13:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:55.966 16:13:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.966 16:13:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.966 16:13:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:55.966 16:13:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.966 16:13:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:55.966 16:13:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:55.966 16:13:25 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:55.966 16:13:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:55.966 16:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.966 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:56.234 nvme0n1 00:20:56.234 16:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.234 16:13:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:56.234 16:13:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.234 16:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.234 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:56.234 16:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.234 16:13:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.234 16:13:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.234 16:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.234 16:13:26 -- common/autotest_common.sh@10 -- # set +x 00:20:56.234 16:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.234 16:13:26 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.234 16:13:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:56.234 16:13:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:56.234 16:13:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:56.234 16:13:26 -- host/auth.sh@44 -- # digest=sha512 00:20:56.234 16:13:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:56.234 16:13:26 -- host/auth.sh@44 -- # keyid=0 00:20:56.234 16:13:26 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:56.234 16:13:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:56.234 16:13:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:56.234 16:13:26 -- host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:56.234 16:13:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:20:56.234 16:13:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:56.234 16:13:26 -- host/auth.sh@68 -- # digest=sha512 00:20:56.234 16:13:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:56.234 16:13:26 -- host/auth.sh@68 -- # keyid=0 00:20:56.234 16:13:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:56.234 16:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.234 16:13:26 -- common/autotest_common.sh@10 -- # set +x 00:20:56.234 16:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.234 16:13:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:56.234 16:13:26 -- nvmf/common.sh@717 -- # local ip 00:20:56.234 16:13:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:56.234 16:13:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:56.234 16:13:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.234 16:13:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.234 16:13:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:56.234 16:13:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.234 16:13:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:56.234 16:13:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:56.234 16:13:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:56.234 16:13:26 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:56.234 16:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.234 16:13:26 -- common/autotest_common.sh@10 -- # set +x 00:20:56.491 nvme0n1 00:20:56.491 16:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.491 16:13:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.491 16:13:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:56.491 16:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.491 16:13:26 -- common/autotest_common.sh@10 -- # set +x 00:20:56.491 16:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.491 16:13:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.491 16:13:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.491 16:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.491 16:13:26 -- common/autotest_common.sh@10 -- # set +x 00:20:56.491 16:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.491 16:13:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:56.491 16:13:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:56.491 16:13:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:56.491 16:13:26 -- host/auth.sh@44 -- # digest=sha512 00:20:56.491 16:13:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:56.491 16:13:26 -- host/auth.sh@44 -- # keyid=1 00:20:56.491 16:13:26 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:56.491 16:13:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:56.491 16:13:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:56.491 16:13:26 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:56.491 16:13:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:20:56.491 16:13:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:56.491 16:13:26 -- host/auth.sh@68 -- # digest=sha512 00:20:56.491 16:13:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:56.491 16:13:26 -- host/auth.sh@68 -- # keyid=1 00:20:56.491 16:13:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:56.491 16:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.491 16:13:26 -- common/autotest_common.sh@10 -- # set +x 00:20:56.491 16:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:56.491 16:13:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:56.491 16:13:26 -- nvmf/common.sh@717 -- # local ip 00:20:56.491 16:13:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:56.491 16:13:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:56.491 16:13:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.491 16:13:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.491 16:13:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:56.491 16:13:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.492 16:13:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:56.492 16:13:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:56.492 16:13:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:56.492 16:13:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:56.492 16:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:56.492 16:13:26 -- common/autotest_common.sh@10 -- # set +x 00:20:57.057 nvme0n1 00:20:57.057 16:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.057 16:13:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.057 16:13:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:57.057 16:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.057 16:13:26 -- common/autotest_common.sh@10 -- # set +x 00:20:57.057 16:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.057 16:13:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.057 16:13:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.057 16:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.057 16:13:26 -- common/autotest_common.sh@10 -- # set +x 00:20:57.057 16:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.057 16:13:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:57.057 16:13:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:57.057 16:13:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:57.057 16:13:26 -- host/auth.sh@44 -- # digest=sha512 00:20:57.057 16:13:26 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:57.057 16:13:26 -- host/auth.sh@44 -- # keyid=2 00:20:57.057 16:13:26 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:57.057 16:13:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:57.057 16:13:26 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:57.057 16:13:26 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:57.057 16:13:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:20:57.057 16:13:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:57.057 16:13:26 -- host/auth.sh@68 -- # digest=sha512 00:20:57.057 16:13:26 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:57.057 16:13:26 -- host/auth.sh@68 -- # keyid=2 00:20:57.057 16:13:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:57.057 16:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.057 16:13:26 -- common/autotest_common.sh@10 -- # set +x 00:20:57.057 16:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.057 16:13:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:57.057 16:13:26 -- nvmf/common.sh@717 -- # local ip 00:20:57.057 16:13:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:57.057 16:13:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:57.057 16:13:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.057 16:13:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.057 16:13:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:57.057 16:13:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.057 16:13:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:57.057 16:13:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:57.057 16:13:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:57.057 16:13:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:57.057 16:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.057 16:13:26 -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.315 nvme0n1 00:20:57.315 16:13:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.315 16:13:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.315 16:13:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:57.315 16:13:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.315 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.315 16:13:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.315 16:13:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.315 16:13:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.315 16:13:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.315 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.315 16:13:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.315 16:13:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:57.315 16:13:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:57.315 16:13:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:57.315 16:13:27 -- host/auth.sh@44 -- # digest=sha512 00:20:57.315 16:13:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:57.315 16:13:27 -- host/auth.sh@44 -- # keyid=3 00:20:57.315 16:13:27 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:57.315 16:13:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:57.315 16:13:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:57.315 16:13:27 -- host/auth.sh@49 -- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:20:57.315 16:13:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:20:57.315 16:13:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:57.315 16:13:27 -- host/auth.sh@68 -- # digest=sha512 00:20:57.315 16:13:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:57.315 16:13:27 -- host/auth.sh@68 -- # keyid=3 00:20:57.315 16:13:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:57.315 16:13:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.315 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.315 16:13:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.315 16:13:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:57.574 16:13:27 -- nvmf/common.sh@717 -- # local ip 00:20:57.574 16:13:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:57.574 16:13:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:57.574 16:13:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.574 16:13:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.574 16:13:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:57.574 16:13:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.574 16:13:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:57.574 16:13:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:57.574 16:13:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:57.574 16:13:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:57.574 16:13:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.574 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.832 nvme0n1 00:20:57.832 16:13:27 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:20:57.832 16:13:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.832 16:13:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:57.832 16:13:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.832 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.832 16:13:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.832 16:13:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.832 16:13:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.832 16:13:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.832 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.832 16:13:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.832 16:13:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:57.832 16:13:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:57.832 16:13:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:57.832 16:13:27 -- host/auth.sh@44 -- # digest=sha512 00:20:57.832 16:13:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:57.832 16:13:27 -- host/auth.sh@44 -- # keyid=4 00:20:57.832 16:13:27 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:57.832 16:13:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:57.832 16:13:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:57.832 16:13:27 -- host/auth.sh@49 -- # echo DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:20:57.832 16:13:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:20:57.832 16:13:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:57.832 16:13:27 -- host/auth.sh@68 -- # digest=sha512 00:20:57.832 16:13:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:57.832 16:13:27 -- host/auth.sh@68 -- # keyid=4 00:20:57.833 16:13:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:57.833 16:13:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.833 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:20:57.833 16:13:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.833 16:13:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:57.833 16:13:27 -- nvmf/common.sh@717 -- # local ip 00:20:57.833 16:13:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:57.833 16:13:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:57.833 16:13:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.833 16:13:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.833 16:13:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:57.833 16:13:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.833 16:13:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:57.833 16:13:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:57.833 16:13:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:57.833 16:13:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:57.833 16:13:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.833 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:20:58.091 nvme0n1 00:20:58.091 16:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.091 16:13:28 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:58.091 16:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.091 16:13:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:58.091 16:13:28 -- common/autotest_common.sh@10 -- # set +x 00:20:58.350 16:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.350 16:13:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.350 16:13:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.350 16:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.350 16:13:28 -- common/autotest_common.sh@10 -- # set +x 00:20:58.350 16:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.350 16:13:28 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.350 16:13:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:58.350 16:13:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:58.350 16:13:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:58.350 16:13:28 -- host/auth.sh@44 -- # digest=sha512 00:20:58.350 16:13:28 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:58.350 16:13:28 -- host/auth.sh@44 -- # keyid=0 00:20:58.350 16:13:28 -- host/auth.sh@45 -- # key=DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:58.350 16:13:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:58.350 16:13:28 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:58.350 16:13:28 -- host/auth.sh@49 -- # echo DHHC-1:00:OWEyZjQyZjIwNjc4ODMyYWE1MzEyODBhM2Q1YzdjNjLzphD/: 00:20:58.350 16:13:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:20:58.350 16:13:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:58.350 16:13:28 -- host/auth.sh@68 -- # digest=sha512 00:20:58.350 16:13:28 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:58.350 16:13:28 -- host/auth.sh@68 -- # keyid=0 00:20:58.350 16:13:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:58.350 16:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.350 16:13:28 -- common/autotest_common.sh@10 -- # set +x 00:20:58.350 16:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.350 16:13:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:58.350 16:13:28 -- nvmf/common.sh@717 -- # local ip 00:20:58.350 16:13:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:58.350 16:13:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:58.350 16:13:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.350 16:13:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.350 16:13:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:58.350 16:13:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.350 16:13:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:58.350 16:13:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:58.350 16:13:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:58.350 16:13:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:58.350 16:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.350 16:13:28 -- common/autotest_common.sh@10 -- # set +x 00:20:58.917 nvme0n1 00:20:58.917 16:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.917 16:13:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:58.917 16:13:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 
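The trace above is one iteration of the positive-path loop in host/auth.sh: for each dhgroup in "${dhgroups[@]}" and each keyid, the matching DHHC-1 key is installed on the target (nvmet_auth_set_key), the host is limited to that digest/dhgroup pair with bdev_nvme_set_options, a controller is attached with the corresponding --dhchap-key, its name is checked through bdev_nvme_get_controllers piped to jq, and it is detached again. A minimal sketch of that loop body, reconstructed from the trace (assuming rpc_cmd wraps SPDK's scripts/rpc.py and reusing the address and NQNs shown in the log):

  # Sketch reconstructed from the trace above, not copied from host/auth.sh.
  # Assumes $digest, $dhgroup and $keyid come from the surrounding loops and that
  # nvmet_auth_set_key has already loaded key$keyid on the target side.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # authenticated attach worked
  rpc_cmd bdev_nvme_detach_controller nvme0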
00:20:58.917 16:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.917 16:13:28 -- common/autotest_common.sh@10 -- # set +x 00:20:58.917 16:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.917 16:13:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.917 16:13:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.917 16:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.917 16:13:28 -- common/autotest_common.sh@10 -- # set +x 00:20:58.917 16:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.917 16:13:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:58.917 16:13:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:58.917 16:13:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:58.917 16:13:28 -- host/auth.sh@44 -- # digest=sha512 00:20:58.917 16:13:28 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:58.917 16:13:28 -- host/auth.sh@44 -- # keyid=1 00:20:58.917 16:13:28 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:58.917 16:13:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:58.917 16:13:28 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:58.917 16:13:28 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:20:58.917 16:13:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:20:58.917 16:13:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:58.917 16:13:28 -- host/auth.sh@68 -- # digest=sha512 00:20:58.917 16:13:28 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:58.917 16:13:28 -- host/auth.sh@68 -- # keyid=1 00:20:58.917 16:13:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:58.917 16:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.917 16:13:28 -- common/autotest_common.sh@10 -- # set +x 00:20:58.917 16:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.917 16:13:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:58.917 16:13:28 -- nvmf/common.sh@717 -- # local ip 00:20:58.917 16:13:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:58.917 16:13:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:58.917 16:13:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.917 16:13:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.917 16:13:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:58.917 16:13:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.917 16:13:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:58.917 16:13:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:58.917 16:13:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:58.917 16:13:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:58.917 16:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.917 16:13:28 -- common/autotest_common.sh@10 -- # set +x 00:20:59.520 nvme0n1 00:20:59.520 16:13:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.520 16:13:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.520 16:13:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:59.520 16:13:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.520 16:13:29 -- 
common/autotest_common.sh@10 -- # set +x 00:20:59.520 16:13:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.813 16:13:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.813 16:13:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.813 16:13:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.813 16:13:29 -- common/autotest_common.sh@10 -- # set +x 00:20:59.813 16:13:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.813 16:13:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:59.813 16:13:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:59.813 16:13:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:59.813 16:13:29 -- host/auth.sh@44 -- # digest=sha512 00:20:59.813 16:13:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:59.813 16:13:29 -- host/auth.sh@44 -- # keyid=2 00:20:59.813 16:13:29 -- host/auth.sh@45 -- # key=DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:59.813 16:13:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:59.813 16:13:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:59.813 16:13:29 -- host/auth.sh@49 -- # echo DHHC-1:01:YTRhOTBhNzU5Zjg3MzZmMGVlMDNiNTZjYWY0ODlkMjN7NoLX: 00:20:59.813 16:13:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:20:59.813 16:13:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:59.813 16:13:29 -- host/auth.sh@68 -- # digest=sha512 00:20:59.813 16:13:29 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:59.813 16:13:29 -- host/auth.sh@68 -- # keyid=2 00:20:59.813 16:13:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:59.813 16:13:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.813 16:13:29 -- common/autotest_common.sh@10 -- # set +x 00:20:59.813 16:13:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.813 16:13:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:59.813 16:13:29 -- nvmf/common.sh@717 -- # local ip 00:20:59.813 16:13:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:59.813 16:13:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:59.813 16:13:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.813 16:13:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.813 16:13:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:59.813 16:13:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.813 16:13:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:59.813 16:13:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:59.813 16:13:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:59.813 16:13:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:59.813 16:13:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.813 16:13:29 -- common/autotest_common.sh@10 -- # set +x 00:21:00.378 nvme0n1 00:21:00.378 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.378 16:13:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.378 16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.378 16:13:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:00.378 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:21:00.378 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.378 16:13:30 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:21:00.378 16:13:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.378 16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.378 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:21:00.378 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.378 16:13:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:00.378 16:13:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:21:00.378 16:13:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:00.378 16:13:30 -- host/auth.sh@44 -- # digest=sha512 00:21:00.378 16:13:30 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:00.379 16:13:30 -- host/auth.sh@44 -- # keyid=3 00:21:00.379 16:13:30 -- host/auth.sh@45 -- # key=DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:21:00.379 16:13:30 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:00.379 16:13:30 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:00.379 16:13:30 -- host/auth.sh@49 -- # echo DHHC-1:02:MWY5YTE3YTYzMjVkNmJjZDQyMTU4MzgzYzdlN2VjM2FkZTIwYWVhYWEzYmM2YzkwePeV2g==: 00:21:00.379 16:13:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:21:00.379 16:13:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:00.379 16:13:30 -- host/auth.sh@68 -- # digest=sha512 00:21:00.379 16:13:30 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:00.379 16:13:30 -- host/auth.sh@68 -- # keyid=3 00:21:00.379 16:13:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:00.379 16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.379 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:21:00.379 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.379 16:13:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:00.379 16:13:30 -- nvmf/common.sh@717 -- # local ip 00:21:00.379 16:13:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:00.379 16:13:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:00.379 16:13:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.379 16:13:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.379 16:13:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:21:00.379 16:13:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.379 16:13:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:21:00.379 16:13:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:21:00.379 16:13:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:21:00.379 16:13:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:21:00.379 16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.379 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:21:00.943 nvme0n1 00:21:00.943 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.943 16:13:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:00.943 16:13:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.943 16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.943 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:21:00.943 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.943 16:13:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.943 16:13:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.943 
16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.943 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:21:00.943 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.943 16:13:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:21:00.943 16:13:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:00.943 16:13:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:00.943 16:13:30 -- host/auth.sh@44 -- # digest=sha512 00:21:00.943 16:13:30 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:00.943 16:13:30 -- host/auth.sh@44 -- # keyid=4 00:21:00.943 16:13:30 -- host/auth.sh@45 -- # key=DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:21:00.943 16:13:30 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:21:00.943 16:13:30 -- host/auth.sh@48 -- # echo ffdhe8192 00:21:00.943 16:13:30 -- host/auth.sh@49 -- # echo DHHC-1:03:MzE2NWE1NmQzNTFiY2Q4ZjljZDViMDk0NThkMjQ1NjIzZDZhZjNlZjRlZWFhYzc3MzgxMzRiNTE3ZDFjNzgxN9CT5iI=: 00:21:00.943 16:13:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:21:00.943 16:13:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:21:00.943 16:13:30 -- host/auth.sh@68 -- # digest=sha512 00:21:00.943 16:13:30 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:21:00.943 16:13:30 -- host/auth.sh@68 -- # keyid=4 00:21:00.943 16:13:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:00.943 16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.943 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:21:00.943 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.943 16:13:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:21:00.943 16:13:30 -- nvmf/common.sh@717 -- # local ip 00:21:00.943 16:13:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:00.943 16:13:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:00.943 16:13:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.943 16:13:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.943 16:13:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:21:00.943 16:13:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.943 16:13:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:21:00.943 16:13:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:21:00.943 16:13:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:21:00.943 16:13:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:00.943 16:13:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.943 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:21:01.509 nvme0n1 00:21:01.509 16:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.509 16:13:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.509 16:13:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:21:01.509 16:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.509 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:21:01.509 16:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.767 16:13:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.767 16:13:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.767 16:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.767 
16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:21:01.767 16:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.767 16:13:31 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:01.767 16:13:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:21:01.767 16:13:31 -- host/auth.sh@44 -- # digest=sha256 00:21:01.767 16:13:31 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:01.767 16:13:31 -- host/auth.sh@44 -- # keyid=1 00:21:01.767 16:13:31 -- host/auth.sh@45 -- # key=DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:21:01.767 16:13:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:21:01.767 16:13:31 -- host/auth.sh@48 -- # echo ffdhe2048 00:21:01.767 16:13:31 -- host/auth.sh@49 -- # echo DHHC-1:00:YmJmZjA1NThmZjIwYWI5ZmIyZmM1OWRkYzQ5NzI3OGMwZGI4YWM3MzU4ODgwYzgz9M+1tw==: 00:21:01.767 16:13:31 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:01.767 16:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.767 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:21:01.767 16:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.767 16:13:31 -- host/auth.sh@119 -- # get_main_ns_ip 00:21:01.767 16:13:31 -- nvmf/common.sh@717 -- # local ip 00:21:01.767 16:13:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:01.767 16:13:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:01.767 16:13:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.767 16:13:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.767 16:13:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:21:01.767 16:13:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.767 16:13:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:21:01.767 16:13:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:21:01.767 16:13:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:21:01.767 16:13:31 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:01.767 16:13:31 -- common/autotest_common.sh@638 -- # local es=0 00:21:01.767 16:13:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:01.767 16:13:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:01.767 16:13:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:01.767 16:13:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:01.767 16:13:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:01.767 16:13:31 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:01.767 16:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.767 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:21:01.767 request: 00:21:01.767 { 00:21:01.767 "name": "nvme0", 00:21:01.767 "trtype": "tcp", 00:21:01.767 "traddr": "10.0.0.1", 00:21:01.767 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:01.767 "adrfam": "ipv4", 00:21:01.767 "trsvcid": "4420", 00:21:01.767 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:01.767 "method": "bdev_nvme_attach_controller", 00:21:01.767 "req_id": 1 00:21:01.767 } 00:21:01.767 Got JSON-RPC error 
response 00:21:01.767 response: 00:21:01.767 { 00:21:01.767 "code": -32602, 00:21:01.767 "message": "Invalid parameters" 00:21:01.767 } 00:21:01.767 16:13:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:01.767 16:13:31 -- common/autotest_common.sh@641 -- # es=1 00:21:01.767 16:13:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:01.767 16:13:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:01.767 16:13:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:01.767 16:13:31 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.767 16:13:31 -- host/auth.sh@121 -- # jq length 00:21:01.767 16:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.767 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:21:01.767 16:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.767 16:13:31 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:21:01.767 16:13:31 -- host/auth.sh@124 -- # get_main_ns_ip 00:21:01.767 16:13:31 -- nvmf/common.sh@717 -- # local ip 00:21:01.767 16:13:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:01.767 16:13:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:01.767 16:13:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.767 16:13:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.767 16:13:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:21:01.767 16:13:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.767 16:13:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:21:01.767 16:13:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:21:01.767 16:13:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:21:01.767 16:13:31 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:01.767 16:13:31 -- common/autotest_common.sh@638 -- # local es=0 00:21:01.767 16:13:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:01.767 16:13:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:01.767 16:13:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:01.767 16:13:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:01.767 16:13:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:01.767 16:13:31 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:01.767 16:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.767 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:21:01.767 request: 00:21:01.767 { 00:21:01.767 "name": "nvme0", 00:21:01.767 "trtype": "tcp", 00:21:01.767 "traddr": "10.0.0.1", 00:21:01.767 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:01.767 "adrfam": "ipv4", 00:21:01.767 "trsvcid": "4420", 00:21:01.767 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:01.767 "dhchap_key": "key2", 00:21:01.767 "method": "bdev_nvme_attach_controller", 00:21:01.767 "req_id": 1 00:21:01.767 } 00:21:01.767 Got JSON-RPC error response 00:21:01.767 response: 00:21:01.767 { 00:21:01.767 "code": -32602, 00:21:01.767 "message": "Invalid parameters" 00:21:01.767 } 00:21:01.767 16:13:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 
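Both rejected attach attempts above are intentional: after reconfiguring the target for sha256/ffdhe2048 with key 1, host/auth.sh verifies that connecting with no DH-HMAC-CHAP key and with a non-matching key2 both fail with -32602 (Invalid parameters) and leave no controller behind. A small sketch of that negative check, reusing the attach command exactly as traced (the NOT/valid_exec_arg wrappers in the harness amount to the same assertion):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  if $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
      echo "unexpected: attach with a mismatched DH-HMAC-CHAP key succeeded" >&2
      exit 1
  fi
  # The controller list must stay empty after the rejected attempts.
  (( $($rpc bdev_nvme_get_controllers | jq length) == 0 ))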
00:21:01.767 16:13:31 -- common/autotest_common.sh@641 -- # es=1 00:21:01.767 16:13:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:01.767 16:13:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:01.767 16:13:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:01.767 16:13:31 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.767 16:13:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.767 16:13:31 -- host/auth.sh@127 -- # jq length 00:21:01.767 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:21:01.767 16:13:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.767 16:13:31 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:21:01.767 16:13:31 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:21:01.767 16:13:31 -- host/auth.sh@130 -- # cleanup 00:21:01.767 16:13:31 -- host/auth.sh@24 -- # nvmftestfini 00:21:01.767 16:13:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:01.767 16:13:31 -- nvmf/common.sh@117 -- # sync 00:21:02.025 16:13:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:02.025 16:13:31 -- nvmf/common.sh@120 -- # set +e 00:21:02.025 16:13:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:02.025 16:13:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:02.025 rmmod nvme_tcp 00:21:02.025 rmmod nvme_fabrics 00:21:02.025 16:13:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:02.025 16:13:31 -- nvmf/common.sh@124 -- # set -e 00:21:02.025 16:13:31 -- nvmf/common.sh@125 -- # return 0 00:21:02.025 16:13:31 -- nvmf/common.sh@478 -- # '[' -n 89619 ']' 00:21:02.025 16:13:31 -- nvmf/common.sh@479 -- # killprocess 89619 00:21:02.025 16:13:31 -- common/autotest_common.sh@936 -- # '[' -z 89619 ']' 00:21:02.025 16:13:31 -- common/autotest_common.sh@940 -- # kill -0 89619 00:21:02.025 16:13:31 -- common/autotest_common.sh@941 -- # uname 00:21:02.025 16:13:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:02.025 16:13:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89619 00:21:02.025 16:13:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:02.025 16:13:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:02.025 16:13:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89619' 00:21:02.025 killing process with pid 89619 00:21:02.025 16:13:31 -- common/autotest_common.sh@955 -- # kill 89619 00:21:02.025 16:13:31 -- common/autotest_common.sh@960 -- # wait 89619 00:21:02.282 16:13:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:02.282 16:13:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:02.283 16:13:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:02.283 16:13:32 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:02.283 16:13:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:02.283 16:13:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.283 16:13:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.283 16:13:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.283 16:13:32 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:02.283 16:13:32 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:02.283 16:13:32 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:02.283 16:13:32 -- host/auth.sh@27 -- # clean_kernel_target 00:21:02.283 16:13:32 -- 
nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:02.283 16:13:32 -- nvmf/common.sh@675 -- # echo 0 00:21:02.283 16:13:32 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:02.283 16:13:32 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:02.283 16:13:32 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:02.283 16:13:32 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:02.283 16:13:32 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:21:02.283 16:13:32 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:21:02.283 16:13:32 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:03.214 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:03.214 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:03.215 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:03.215 16:13:33 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.lN8 /tmp/spdk.key-null.E3D /tmp/spdk.key-sha256.KPt /tmp/spdk.key-sha384.Uh9 /tmp/spdk.key-sha512.rG6 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:03.215 16:13:33 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:03.782 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:03.782 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:03.782 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:03.782 00:21:03.782 real 0m37.507s 00:21:03.782 user 0m33.055s 00:21:03.782 sys 0m4.057s 00:21:03.782 16:13:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:03.782 ************************************ 00:21:03.782 END TEST nvmf_auth 00:21:03.782 ************************************ 00:21:03.782 16:13:33 -- common/autotest_common.sh@10 -- # set +x 00:21:03.782 16:13:33 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:21:03.782 16:13:33 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:03.782 16:13:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:03.782 16:13:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:03.782 16:13:33 -- common/autotest_common.sh@10 -- # set +x 00:21:03.782 ************************************ 00:21:03.782 START TEST nvmf_digest 00:21:03.782 ************************************ 00:21:03.783 16:13:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:04.041 * Looking for test storage... 
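cleanup/clean_kernel_target dismantles the kernel nvmet target in the reverse order it was built: the allowed-host link and host entry go first, the namespace is disabled (the bare "echo 0" in the trace most likely writes to the namespace's enable attribute, though the redirect is not shown), then the port-to-subsystem link, namespace, port and subsystem directories are removed before the modules are unloaded. The configfs paths below are copied from the trace:

  rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_tcp nvmet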
00:21:04.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:04.041 16:13:33 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:04.041 16:13:33 -- nvmf/common.sh@7 -- # uname -s 00:21:04.041 16:13:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.041 16:13:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.041 16:13:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.041 16:13:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.041 16:13:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.041 16:13:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.041 16:13:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.041 16:13:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.041 16:13:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.041 16:13:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.041 16:13:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:21:04.041 16:13:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:21:04.041 16:13:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.041 16:13:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.041 16:13:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:04.041 16:13:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.041 16:13:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:04.041 16:13:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.041 16:13:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.041 16:13:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.042 16:13:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.042 16:13:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.042 16:13:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.042 16:13:33 -- paths/export.sh@5 -- # export PATH 00:21:04.042 16:13:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.042 16:13:33 -- nvmf/common.sh@47 -- # : 0 00:21:04.042 16:13:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:04.042 16:13:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:04.042 16:13:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.042 16:13:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.042 16:13:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.042 16:13:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:04.042 16:13:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:04.042 16:13:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:04.042 16:13:33 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:04.042 16:13:33 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:04.042 16:13:33 -- host/digest.sh@16 -- # runtime=2 00:21:04.042 16:13:33 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:04.042 16:13:33 -- host/digest.sh@138 -- # nvmftestinit 00:21:04.042 16:13:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:04.042 16:13:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.042 16:13:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:04.042 16:13:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:04.042 16:13:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:04.042 16:13:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.042 16:13:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.042 16:13:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.042 16:13:33 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:04.042 16:13:33 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:04.042 16:13:33 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:04.042 16:13:33 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:04.042 16:13:33 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:04.042 16:13:33 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:04.042 16:13:33 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.042 16:13:33 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.042 16:13:33 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:04.042 16:13:33 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:04.042 16:13:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:21:04.042 16:13:33 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:04.042 16:13:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:04.042 16:13:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.042 16:13:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:04.042 16:13:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:04.042 16:13:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:04.042 16:13:33 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:04.042 16:13:33 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:04.042 16:13:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:04.042 Cannot find device "nvmf_tgt_br" 00:21:04.042 16:13:33 -- nvmf/common.sh@155 -- # true 00:21:04.042 16:13:33 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:04.042 Cannot find device "nvmf_tgt_br2" 00:21:04.042 16:13:33 -- nvmf/common.sh@156 -- # true 00:21:04.042 16:13:33 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:04.042 16:13:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:04.042 Cannot find device "nvmf_tgt_br" 00:21:04.042 16:13:33 -- nvmf/common.sh@158 -- # true 00:21:04.042 16:13:33 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:04.042 Cannot find device "nvmf_tgt_br2" 00:21:04.042 16:13:33 -- nvmf/common.sh@159 -- # true 00:21:04.042 16:13:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:04.042 16:13:33 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:04.042 16:13:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:04.042 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:04.042 16:13:33 -- nvmf/common.sh@162 -- # true 00:21:04.042 16:13:34 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:04.042 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:04.042 16:13:34 -- nvmf/common.sh@163 -- # true 00:21:04.042 16:13:34 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:04.301 16:13:34 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:04.301 16:13:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:04.301 16:13:34 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:04.301 16:13:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:04.301 16:13:34 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:04.301 16:13:34 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:04.301 16:13:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:04.301 16:13:34 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:04.301 16:13:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:04.301 16:13:34 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:04.301 16:13:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:04.301 16:13:34 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:04.301 16:13:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:04.301 16:13:34 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:04.301 16:13:34 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:04.301 16:13:34 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:04.301 16:13:34 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:04.301 16:13:34 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:04.301 16:13:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:04.301 16:13:34 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:04.301 16:13:34 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:04.559 16:13:34 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:04.559 16:13:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:04.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:21:04.559 00:21:04.559 --- 10.0.0.2 ping statistics --- 00:21:04.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.559 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:21:04.559 16:13:34 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:04.559 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:04.559 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:21:04.559 00:21:04.559 --- 10.0.0.3 ping statistics --- 00:21:04.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.559 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:21:04.559 16:13:34 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:04.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:04.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:21:04.559 00:21:04.559 --- 10.0.0.1 ping statistics --- 00:21:04.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.559 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:04.559 16:13:34 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.559 16:13:34 -- nvmf/common.sh@422 -- # return 0 00:21:04.559 16:13:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:04.559 16:13:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.559 16:13:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:04.559 16:13:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:04.559 16:13:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.559 16:13:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:04.559 16:13:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:04.559 16:13:34 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:04.559 16:13:34 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:04.559 16:13:34 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:04.559 16:13:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:04.559 16:13:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:04.559 16:13:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.559 ************************************ 00:21:04.559 START TEST nvmf_digest_clean 00:21:04.559 ************************************ 00:21:04.559 16:13:34 -- common/autotest_common.sh@1111 -- # run_digest 00:21:04.559 16:13:34 -- host/digest.sh@120 -- # local dsa_initiator 00:21:04.559 16:13:34 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:04.559 16:13:34 -- host/digest.sh@121 -- # dsa_initiator=false 
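The nvmf_veth_init block traced above builds the test network from nothing: a dedicated network namespace for the target, veth pairs whose bridge-side ends are enslaved to nvmf_br, 10.0.0.1/24 on the initiator side and 10.0.0.2/24 plus 10.0.0.3/24 inside the namespace, iptables rules for port 4420 and bridge forwarding, and ping checks in both directions. Condensed from the traced commands (the second target interface, nvmf_tgt_if2 with 10.0.0.3, follows the same pattern and is left out here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1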
00:21:04.559 16:13:34 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:04.559 16:13:34 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:04.559 16:13:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:04.559 16:13:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:04.559 16:13:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.559 16:13:34 -- nvmf/common.sh@470 -- # nvmfpid=91210 00:21:04.559 16:13:34 -- nvmf/common.sh@471 -- # waitforlisten 91210 00:21:04.559 16:13:34 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:04.559 16:13:34 -- common/autotest_common.sh@817 -- # '[' -z 91210 ']' 00:21:04.559 16:13:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.559 16:13:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:04.559 16:13:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.559 16:13:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:04.559 16:13:34 -- common/autotest_common.sh@10 -- # set +x 00:21:04.559 [2024-04-15 16:13:34.497019] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:04.559 [2024-04-15 16:13:34.497441] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.818 [2024-04-15 16:13:34.640316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.818 [2024-04-15 16:13:34.726065] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.818 [2024-04-15 16:13:34.726380] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.818 [2024-04-15 16:13:34.726562] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.818 [2024-04-15 16:13:34.726796] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.818 [2024-04-15 16:13:34.726839] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
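nvmfappstart then launches the NVMe-oF target inside that namespace with --wait-for-rpc, so initialization can be finished over RPC (and accel settings adjusted first where a test needs that), and waitforlisten blocks until pid 91210 answers on /var/tmp/spdk.sock. The launch line as traced, with a hedged sketch of the wait:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # waitforlisten (harness helper) retries an RPC such as rpc_get_methods against
  # /var/tmp/spdk.sock until the app responds or a timeout expires.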
00:21:04.818 [2024-04-15 16:13:34.726967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.749 16:13:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:05.749 16:13:35 -- common/autotest_common.sh@850 -- # return 0 00:21:05.749 16:13:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:05.749 16:13:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:05.749 16:13:35 -- common/autotest_common.sh@10 -- # set +x 00:21:05.749 16:13:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.749 16:13:35 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:05.749 16:13:35 -- host/digest.sh@126 -- # common_target_config 00:21:05.749 16:13:35 -- host/digest.sh@43 -- # rpc_cmd 00:21:05.749 16:13:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.749 16:13:35 -- common/autotest_common.sh@10 -- # set +x 00:21:05.749 null0 00:21:05.749 [2024-04-15 16:13:35.689206] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.749 [2024-04-15 16:13:35.713379] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.026 16:13:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:06.026 16:13:35 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:06.026 16:13:35 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:06.026 16:13:35 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:06.026 16:13:35 -- host/digest.sh@80 -- # rw=randread 00:21:06.026 16:13:35 -- host/digest.sh@80 -- # bs=4096 00:21:06.026 16:13:35 -- host/digest.sh@80 -- # qd=128 00:21:06.026 16:13:35 -- host/digest.sh@80 -- # scan_dsa=false 00:21:06.026 16:13:35 -- host/digest.sh@83 -- # bperfpid=91243 00:21:06.026 16:13:35 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:06.026 16:13:35 -- host/digest.sh@84 -- # waitforlisten 91243 /var/tmp/bperf.sock 00:21:06.026 16:13:35 -- common/autotest_common.sh@817 -- # '[' -z 91243 ']' 00:21:06.026 16:13:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:06.026 16:13:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:06.026 16:13:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:06.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:06.026 16:13:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:06.026 16:13:35 -- common/autotest_common.sh@10 -- # set +x 00:21:06.026 [2024-04-15 16:13:35.779072] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
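common_target_config then creates the target side shared by all digest runs: a null bdev (null0), the TCP transport, and a listener on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1. Only the resulting notices are visible in the trace, so the sequence below is an assumption built from standard SPDK RPCs that would produce the same state (the null-bdev size and the -a/allow-any-host choice are guesses, not taken from the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_start_init                          # finish init of the --wait-for-rpc app
  $rpc bdev_null_create null0 100 4096               # hypothetical size: 100 MiB, 4 KiB blocks
  $rpc nvmf_create_transport -t tcp -o               # $NVMF_TRANSPORT_OPTS from the trace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420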
00:21:06.026 [2024-04-15 16:13:35.779465] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91243 ] 00:21:06.026 [2024-04-15 16:13:35.928584] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.285 [2024-04-15 16:13:36.011787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.220 16:13:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:07.220 16:13:36 -- common/autotest_common.sh@850 -- # return 0 00:21:07.220 16:13:36 -- host/digest.sh@86 -- # false 00:21:07.220 16:13:36 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:07.220 16:13:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:07.220 16:13:37 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:07.220 16:13:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:07.787 nvme0n1 00:21:07.787 16:13:37 -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:07.787 16:13:37 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:07.787 Running I/O for 2 seconds... 00:21:10.333 00:21:10.333 Latency(us) 00:21:10.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.333 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:10.333 nvme0n1 : 2.01 15546.92 60.73 0.00 0.00 8227.60 7489.83 19223.89 00:21:10.333 =================================================================================================================== 00:21:10.333 Total : 15546.92 60.73 0.00 0.00 8227.60 7489.83 19223.89 00:21:10.333 0 00:21:10.333 16:13:39 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:10.333 16:13:39 -- host/digest.sh@93 -- # get_accel_stats 00:21:10.333 16:13:39 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:10.333 16:13:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:10.333 16:13:39 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:10.333 | select(.opcode=="crc32c") 00:21:10.333 | "\(.module_name) \(.executed)"' 00:21:10.333 16:13:39 -- host/digest.sh@94 -- # false 00:21:10.333 16:13:39 -- host/digest.sh@94 -- # exp_module=software 00:21:10.333 16:13:39 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:10.333 16:13:39 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:10.333 16:13:39 -- host/digest.sh@98 -- # killprocess 91243 00:21:10.333 16:13:39 -- common/autotest_common.sh@936 -- # '[' -z 91243 ']' 00:21:10.333 16:13:39 -- common/autotest_common.sh@940 -- # kill -0 91243 00:21:10.333 16:13:39 -- common/autotest_common.sh@941 -- # uname 00:21:10.334 16:13:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:10.334 16:13:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91243 00:21:10.334 16:13:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:10.334 16:13:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:10.334 16:13:40 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 91243' 00:21:10.334 killing process with pid 91243 00:21:10.334 16:13:40 -- common/autotest_common.sh@955 -- # kill 91243 00:21:10.334 Received shutdown signal, test time was about 2.000000 seconds 00:21:10.334 00:21:10.334 Latency(us) 00:21:10.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.334 =================================================================================================================== 00:21:10.334 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.334 16:13:40 -- common/autotest_common.sh@960 -- # wait 91243 00:21:10.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:10.591 16:13:40 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:10.591 16:13:40 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:10.591 16:13:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:10.591 16:13:40 -- host/digest.sh@80 -- # rw=randread 00:21:10.591 16:13:40 -- host/digest.sh@80 -- # bs=131072 00:21:10.591 16:13:40 -- host/digest.sh@80 -- # qd=16 00:21:10.591 16:13:40 -- host/digest.sh@80 -- # scan_dsa=false 00:21:10.591 16:13:40 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:10.591 16:13:40 -- host/digest.sh@83 -- # bperfpid=91308 00:21:10.591 16:13:40 -- host/digest.sh@84 -- # waitforlisten 91308 /var/tmp/bperf.sock 00:21:10.591 16:13:40 -- common/autotest_common.sh@817 -- # '[' -z 91308 ']' 00:21:10.591 16:13:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:10.591 16:13:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:10.591 16:13:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:10.591 16:13:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:10.591 16:13:40 -- common/autotest_common.sh@10 -- # set +x 00:21:10.591 [2024-04-15 16:13:40.372603] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:10.591 [2024-04-15 16:13:40.372936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91308 ] 00:21:10.591 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:10.591 Zero copy mechanism will not be used. 
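Each run_bperf iteration follows the same fully-traced pattern: start bdevperf on its own RPC socket with --wait-for-rpc, finish its initialization, attach an NVMe-oF TCP controller with data digest enabled (--ddgst), drive I/O for two seconds via bdevperf.py, then read accel_get_stats and confirm the crc32c work ran in the expected module (software here, since scan_dsa=false). Condensed into one sketch using the paths and socket from this run:

  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!
  # (the harness waits for the bperf socket before issuing RPCs)
  bperf_rpc framework_start_init
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # Which accel module executed the CRC32C work, and how many operations it saw:
  bperf_rpc accel_get_stats | jq -rc '.operations[]
      | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'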
00:21:10.591 [2024-04-15 16:13:40.516034] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.850 [2024-04-15 16:13:40.599757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.416 16:13:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:11.416 16:13:41 -- common/autotest_common.sh@850 -- # return 0 00:21:11.416 16:13:41 -- host/digest.sh@86 -- # false 00:21:11.416 16:13:41 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:11.416 16:13:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:11.982 16:13:41 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:11.982 16:13:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:12.241 nvme0n1 00:21:12.241 16:13:42 -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:12.241 16:13:42 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:12.241 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:12.241 Zero copy mechanism will not be used. 00:21:12.241 Running I/O for 2 seconds... 00:21:14.201 00:21:14.201 Latency(us) 00:21:14.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.201 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:14.201 nvme0n1 : 2.00 7053.36 881.67 0.00 0.00 2265.58 1880.26 3900.95 00:21:14.201 =================================================================================================================== 00:21:14.201 Total : 7053.36 881.67 0.00 0.00 2265.58 1880.26 3900.95 00:21:14.201 0 00:21:14.201 16:13:44 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:14.201 16:13:44 -- host/digest.sh@93 -- # get_accel_stats 00:21:14.460 16:13:44 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:14.460 16:13:44 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:14.460 | select(.opcode=="crc32c") 00:21:14.460 | "\(.module_name) \(.executed)"' 00:21:14.460 16:13:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:14.718 16:13:44 -- host/digest.sh@94 -- # false 00:21:14.718 16:13:44 -- host/digest.sh@94 -- # exp_module=software 00:21:14.718 16:13:44 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:14.718 16:13:44 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:14.718 16:13:44 -- host/digest.sh@98 -- # killprocess 91308 00:21:14.718 16:13:44 -- common/autotest_common.sh@936 -- # '[' -z 91308 ']' 00:21:14.718 16:13:44 -- common/autotest_common.sh@940 -- # kill -0 91308 00:21:14.718 16:13:44 -- common/autotest_common.sh@941 -- # uname 00:21:14.718 16:13:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:14.718 16:13:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91308 00:21:14.718 16:13:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:14.718 16:13:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:14.718 16:13:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91308' 00:21:14.718 killing process with pid 91308 00:21:14.718 16:13:44 -- common/autotest_common.sh@955 -- # kill 91308 
00:21:14.718 Received shutdown signal, test time was about 2.000000 seconds 00:21:14.718 00:21:14.718 Latency(us) 00:21:14.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.718 =================================================================================================================== 00:21:14.718 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:14.718 16:13:44 -- common/autotest_common.sh@960 -- # wait 91308 00:21:14.976 16:13:44 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:14.976 16:13:44 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:14.976 16:13:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:14.976 16:13:44 -- host/digest.sh@80 -- # rw=randwrite 00:21:14.976 16:13:44 -- host/digest.sh@80 -- # bs=4096 00:21:14.976 16:13:44 -- host/digest.sh@80 -- # qd=128 00:21:14.976 16:13:44 -- host/digest.sh@80 -- # scan_dsa=false 00:21:14.976 16:13:44 -- host/digest.sh@83 -- # bperfpid=91364 00:21:14.976 16:13:44 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:14.976 16:13:44 -- host/digest.sh@84 -- # waitforlisten 91364 /var/tmp/bperf.sock 00:21:14.976 16:13:44 -- common/autotest_common.sh@817 -- # '[' -z 91364 ']' 00:21:14.976 16:13:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:14.976 16:13:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:14.976 16:13:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:14.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:14.976 16:13:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:14.976 16:13:44 -- common/autotest_common.sh@10 -- # set +x 00:21:14.976 [2024-04-15 16:13:44.868773] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:21:14.976 [2024-04-15 16:13:44.869136] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91364 ] 00:21:15.234 [2024-04-15 16:13:45.009104] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.234 [2024-04-15 16:13:45.092197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.234 16:13:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:15.234 16:13:45 -- common/autotest_common.sh@850 -- # return 0 00:21:15.234 16:13:45 -- host/digest.sh@86 -- # false 00:21:15.234 16:13:45 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:15.234 16:13:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:15.800 16:13:45 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:15.801 16:13:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:15.801 nvme0n1 00:21:16.058 16:13:45 -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:16.058 16:13:45 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:16.058 Running I/O for 2 seconds... 00:21:17.963 00:21:17.963 Latency(us) 00:21:17.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.963 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:17.963 nvme0n1 : 2.00 17889.79 69.88 0.00 0.00 7149.05 3136.37 14293.09 00:21:17.963 =================================================================================================================== 00:21:17.963 Total : 17889.79 69.88 0.00 0.00 7149.05 3136.37 14293.09 00:21:17.963 0 00:21:18.222 16:13:47 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:18.222 16:13:47 -- host/digest.sh@93 -- # get_accel_stats 00:21:18.222 16:13:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:18.222 16:13:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:18.222 16:13:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:18.222 | select(.opcode=="crc32c") 00:21:18.222 | "\(.module_name) \(.executed)"' 00:21:18.222 16:13:48 -- host/digest.sh@94 -- # false 00:21:18.222 16:13:48 -- host/digest.sh@94 -- # exp_module=software 00:21:18.222 16:13:48 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:18.222 16:13:48 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:18.222 16:13:48 -- host/digest.sh@98 -- # killprocess 91364 00:21:18.222 16:13:48 -- common/autotest_common.sh@936 -- # '[' -z 91364 ']' 00:21:18.222 16:13:48 -- common/autotest_common.sh@940 -- # kill -0 91364 00:21:18.222 16:13:48 -- common/autotest_common.sh@941 -- # uname 00:21:18.222 16:13:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:18.222 16:13:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91364 00:21:18.222 16:13:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:18.222 16:13:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:18.222 16:13:48 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 91364' 00:21:18.222 killing process with pid 91364 00:21:18.222 16:13:48 -- common/autotest_common.sh@955 -- # kill 91364 00:21:18.222 Received shutdown signal, test time was about 2.000000 seconds 00:21:18.222 00:21:18.222 Latency(us) 00:21:18.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.222 =================================================================================================================== 00:21:18.222 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.222 16:13:48 -- common/autotest_common.sh@960 -- # wait 91364 00:21:18.481 16:13:48 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:18.481 16:13:48 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:18.481 16:13:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:18.481 16:13:48 -- host/digest.sh@80 -- # rw=randwrite 00:21:18.481 16:13:48 -- host/digest.sh@80 -- # bs=131072 00:21:18.481 16:13:48 -- host/digest.sh@80 -- # qd=16 00:21:18.481 16:13:48 -- host/digest.sh@80 -- # scan_dsa=false 00:21:18.481 16:13:48 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:18.481 16:13:48 -- host/digest.sh@83 -- # bperfpid=91416 00:21:18.481 16:13:48 -- host/digest.sh@84 -- # waitforlisten 91416 /var/tmp/bperf.sock 00:21:18.481 16:13:48 -- common/autotest_common.sh@817 -- # '[' -z 91416 ']' 00:21:18.481 16:13:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:18.481 16:13:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:18.481 16:13:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:18.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:18.481 16:13:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:18.481 16:13:48 -- common/autotest_common.sh@10 -- # set +x 00:21:18.481 [2024-04-15 16:13:48.413746] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:18.481 [2024-04-15 16:13:48.413985] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91416 ] 00:21:18.481 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:18.481 Zero copy mechanism will not be used. 
00:21:18.780 [2024-04-15 16:13:48.550514] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.780 [2024-04-15 16:13:48.600043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.781 16:13:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:18.781 16:13:48 -- common/autotest_common.sh@850 -- # return 0 00:21:18.781 16:13:48 -- host/digest.sh@86 -- # false 00:21:18.781 16:13:48 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:18.781 16:13:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:19.348 16:13:49 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:19.348 16:13:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:19.606 nvme0n1 00:21:19.606 16:13:49 -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:19.607 16:13:49 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:19.865 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:19.865 Zero copy mechanism will not be used. 00:21:19.865 Running I/O for 2 seconds... 00:21:21.770 00:21:21.770 Latency(us) 00:21:21.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.770 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:21.770 nvme0n1 : 2.00 7998.01 999.75 0.00 0.00 1996.59 1412.14 7177.75 00:21:21.770 =================================================================================================================== 00:21:21.770 Total : 7998.01 999.75 0.00 0.00 1996.59 1412.14 7177.75 00:21:21.770 0 00:21:21.770 16:13:51 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:21.770 16:13:51 -- host/digest.sh@93 -- # get_accel_stats 00:21:21.770 16:13:51 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:21.770 16:13:51 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:21.770 | select(.opcode=="crc32c") 00:21:21.770 | "\(.module_name) \(.executed)"' 00:21:21.770 16:13:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:22.027 16:13:51 -- host/digest.sh@94 -- # false 00:21:22.027 16:13:51 -- host/digest.sh@94 -- # exp_module=software 00:21:22.027 16:13:51 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:22.027 16:13:51 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:22.027 16:13:51 -- host/digest.sh@98 -- # killprocess 91416 00:21:22.027 16:13:51 -- common/autotest_common.sh@936 -- # '[' -z 91416 ']' 00:21:22.027 16:13:51 -- common/autotest_common.sh@940 -- # kill -0 91416 00:21:22.027 16:13:51 -- common/autotest_common.sh@941 -- # uname 00:21:22.027 16:13:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:22.027 16:13:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91416 00:21:22.027 16:13:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:22.027 16:13:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:22.027 16:13:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91416' 00:21:22.027 killing process with pid 91416 00:21:22.027 16:13:51 -- common/autotest_common.sh@955 -- # kill 91416 
00:21:22.027 Received shutdown signal, test time was about 2.000000 seconds 00:21:22.027 00:21:22.027 Latency(us) 00:21:22.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.027 =================================================================================================================== 00:21:22.027 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.027 16:13:51 -- common/autotest_common.sh@960 -- # wait 91416 00:21:22.323 16:13:52 -- host/digest.sh@132 -- # killprocess 91210 00:21:22.323 16:13:52 -- common/autotest_common.sh@936 -- # '[' -z 91210 ']' 00:21:22.323 16:13:52 -- common/autotest_common.sh@940 -- # kill -0 91210 00:21:22.323 16:13:52 -- common/autotest_common.sh@941 -- # uname 00:21:22.323 16:13:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:22.323 16:13:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91210 00:21:22.323 16:13:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:22.323 16:13:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:22.323 16:13:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91210' 00:21:22.323 killing process with pid 91210 00:21:22.323 16:13:52 -- common/autotest_common.sh@955 -- # kill 91210 00:21:22.323 16:13:52 -- common/autotest_common.sh@960 -- # wait 91210 00:21:22.581 00:21:22.581 real 0m17.896s 00:21:22.581 user 0m33.598s 00:21:22.581 sys 0m5.692s 00:21:22.581 16:13:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:22.581 16:13:52 -- common/autotest_common.sh@10 -- # set +x 00:21:22.581 ************************************ 00:21:22.581 END TEST nvmf_digest_clean 00:21:22.581 ************************************ 00:21:22.581 16:13:52 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:22.581 16:13:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:22.581 16:13:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:22.581 16:13:52 -- common/autotest_common.sh@10 -- # set +x 00:21:22.581 ************************************ 00:21:22.581 START TEST nvmf_digest_error 00:21:22.581 ************************************ 00:21:22.581 16:13:52 -- common/autotest_common.sh@1111 -- # run_digest_error 00:21:22.581 16:13:52 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:22.581 16:13:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:22.581 16:13:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:22.581 16:13:52 -- common/autotest_common.sh@10 -- # set +x 00:21:22.581 16:13:52 -- nvmf/common.sh@470 -- # nvmfpid=91497 00:21:22.581 16:13:52 -- nvmf/common.sh@471 -- # waitforlisten 91497 00:21:22.581 16:13:52 -- common/autotest_common.sh@817 -- # '[' -z 91497 ']' 00:21:22.582 16:13:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.582 16:13:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:22.582 16:13:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:22.582 16:13:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
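Each bperf helper and finally the target itself are stopped with killprocess, which sanity-checks the pid before terminating it; the traced steps (kill -0, a ps comm= lookup that reports reactor_0/reactor_1, kill, wait) reduce to roughly the following simplified reconstruction (the real helper in autotest_common.sh has additional handling, e.g. for sudo-wrapped processes):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid"                              # the process must still be alive
      ps --no-headers -o comm= "$pid"             # traced as reactor_0 / reactor_1 here
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                 # reap it and propagate its exit status
  }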
00:21:22.582 16:13:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:22.582 16:13:52 -- common/autotest_common.sh@10 -- # set +x 00:21:22.582 [2024-04-15 16:13:52.527786] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:22.582 [2024-04-15 16:13:52.528106] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.840 [2024-04-15 16:13:52.675008] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.840 [2024-04-15 16:13:52.726081] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.840 [2024-04-15 16:13:52.726359] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.840 [2024-04-15 16:13:52.726595] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.840 [2024-04-15 16:13:52.726798] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.840 [2024-04-15 16:13:52.726847] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.840 [2024-04-15 16:13:52.726965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.775 16:13:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:23.775 16:13:53 -- common/autotest_common.sh@850 -- # return 0 00:21:23.775 16:13:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:23.775 16:13:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:23.775 16:13:53 -- common/autotest_common.sh@10 -- # set +x 00:21:23.775 16:13:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.775 16:13:53 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:23.775 16:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.775 16:13:53 -- common/autotest_common.sh@10 -- # set +x 00:21:23.775 [2024-04-15 16:13:53.571754] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:23.775 16:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.775 16:13:53 -- host/digest.sh@105 -- # common_target_config 00:21:23.775 16:13:53 -- host/digest.sh@43 -- # rpc_cmd 00:21:23.775 16:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.775 16:13:53 -- common/autotest_common.sh@10 -- # set +x 00:21:23.775 null0 00:21:23.775 [2024-04-15 16:13:53.659707] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.775 [2024-04-15 16:13:53.683830] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.775 16:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.775 16:13:53 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:23.775 16:13:53 -- host/digest.sh@54 -- # local rw bs qd 00:21:23.775 16:13:53 -- host/digest.sh@56 -- # rw=randread 00:21:23.775 16:13:53 -- host/digest.sh@56 -- # bs=4096 00:21:23.775 16:13:53 -- host/digest.sh@56 -- # qd=128 00:21:23.775 16:13:53 -- host/digest.sh@58 -- # bperfpid=91534 00:21:23.775 16:13:53 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:23.775 16:13:53 -- host/digest.sh@60 -- # waitforlisten 
91534 /var/tmp/bperf.sock 00:21:23.775 16:13:53 -- common/autotest_common.sh@817 -- # '[' -z 91534 ']' 00:21:23.775 16:13:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:23.775 16:13:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:23.775 16:13:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:23.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:23.775 16:13:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:23.775 16:13:53 -- common/autotest_common.sh@10 -- # set +x 00:21:23.775 [2024-04-15 16:13:53.733358] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:23.775 [2024-04-15 16:13:53.733684] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91534 ] 00:21:24.034 [2024-04-15 16:13:53.875899] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.034 [2024-04-15 16:13:53.932198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.292 16:13:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:24.292 16:13:54 -- common/autotest_common.sh@850 -- # return 0 00:21:24.292 16:13:54 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:24.292 16:13:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:24.292 16:13:54 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:24.292 16:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.292 16:13:54 -- common/autotest_common.sh@10 -- # set +x 00:21:24.292 16:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.292 16:13:54 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:24.292 16:13:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:24.859 nvme0n1 00:21:24.859 16:13:54 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:24.859 16:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.859 16:13:54 -- common/autotest_common.sh@10 -- # set +x 00:21:24.859 16:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.859 16:13:54 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:24.859 16:13:54 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:24.859 Running I/O for 2 seconds... 
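The run above is driven entirely over JSON-RPC: crc32c is routed to the error-injection accel module on the target, bdevperf attaches over NVMe/TCP with data digest (--ddgst) enabled, corruption is injected for every 256th crc32c, and a timed randread pass is started. A minimal sketch of that same sequence, using only the commands, paths, addresses, and names as they appear in the trace (and assuming the nvmf target started earlier is on the default RPC socket):

# Target side (default RPC socket assumed): route crc32c through the error-injection module
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error

# Start bdevperf on its own RPC socket in wait-for-tests mode (-z), as in the trace
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
# (the test waits for /var/tmp/bperf.sock to appear before issuing the RPCs below)

# bdevperf side: unlimited retries, injection off while attaching, then attach with data digest
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 256th crc32c on the target, then kick off the timed randread pass
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With that setup, each injected digest failure below surfaces as a data digest error on the TCP qpair followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which the bdev layer retries because of the --bdev-retry-count -1 setting.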
00:21:24.859 [2024-04-15 16:13:54.812049] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:24.859 [2024-04-15 16:13:54.812254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.859 [2024-04-15 16:13:54.812359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:54.827381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:54.827557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:54.827735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:54.842377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:54.842562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:54.842689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:54.857996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:54.858185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:54.858296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:54.873482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:54.873677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:54.873809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:54.889242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:54.889409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:54.889535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:54.904905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:54.905076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:54.905201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:54.920462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:54.920760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:54.920898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:54.935822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:54.936096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:54.936221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:54.950738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:54.951011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:54.951202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:54.965794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:54.966073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:54.966178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:54.981116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:54.981322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:54.981474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:54.996510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:54.996720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:54.996824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:55.011843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:55.012118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:55.012215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:55.027009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:55.027196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:55.027327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:55.042401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:55.042606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:55.042718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:55.058033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:55.058293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:55.058449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.119 [2024-04-15 16:13:55.074714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.119 [2024-04-15 16:13:55.074909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.119 [2024-04-15 16:13:55.075008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.378 [2024-04-15 16:13:55.091092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.378 [2024-04-15 16:13:55.091270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.378 [2024-04-15 16:13:55.091371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.107190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.107362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.107461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.123591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.123766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.123934] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.140153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.140329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.140432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.156502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.156703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.156904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.173336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.173514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.173642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.189690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.189864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.189967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.206009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.206197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.206301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.223044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.223254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.223350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.239068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.239235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12886 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.239366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.254939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.255131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.255233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.271198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.271369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.271521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.287410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.287607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.287745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.303205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.303426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.303542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.318874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.319036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.319140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.379 [2024-04-15 16:13:55.333829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.379 [2024-04-15 16:13:55.333999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.379 [2024-04-15 16:13:55.334139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.638 [2024-04-15 16:13:55.349897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.638 [2024-04-15 16:13:55.350065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:25327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.638 [2024-04-15 16:13:55.350161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.638 [2024-04-15 16:13:55.365944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.638 [2024-04-15 16:13:55.366117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.638 [2024-04-15 16:13:55.366249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.638 [2024-04-15 16:13:55.382694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.638 [2024-04-15 16:13:55.382939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.638 [2024-04-15 16:13:55.383104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.638 [2024-04-15 16:13:55.400342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.638 [2024-04-15 16:13:55.400668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.638 [2024-04-15 16:13:55.400848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.638 [2024-04-15 16:13:55.416613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.638 [2024-04-15 16:13:55.416816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.638 [2024-04-15 16:13:55.416947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.638 [2024-04-15 16:13:55.433302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.638 [2024-04-15 16:13:55.433531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.638 [2024-04-15 16:13:55.433710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.638 [2024-04-15 16:13:55.450384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.638 [2024-04-15 16:13:55.450649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.638 [2024-04-15 16:13:55.450754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.638 [2024-04-15 16:13:55.467072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.638 [2024-04-15 16:13:55.467298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.638 [2024-04-15 16:13:55.467493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.638 [2024-04-15 16:13:55.483182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.638 [2024-04-15 16:13:55.483361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.638 [2024-04-15 16:13:55.483455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.638 [2024-04-15 16:13:55.498766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.638 [2024-04-15 16:13:55.498943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.638 [2024-04-15 16:13:55.499084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.638 [2024-04-15 16:13:55.514676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.638 [2024-04-15 16:13:55.514913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.638 [2024-04-15 16:13:55.515059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.638 [2024-04-15 16:13:55.530666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.638 [2024-04-15 16:13:55.530928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.638 [2024-04-15 16:13:55.531038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.638 [2024-04-15 16:13:55.546817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.639 [2024-04-15 16:13:55.547058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.639 [2024-04-15 16:13:55.547184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.639 [2024-04-15 16:13:55.564005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.639 [2024-04-15 16:13:55.564201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.639 [2024-04-15 16:13:55.564317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.639 [2024-04-15 16:13:55.580897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 
00:21:25.639 [2024-04-15 16:13:55.581076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.639 [2024-04-15 16:13:55.581179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.639 [2024-04-15 16:13:55.597037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.639 [2024-04-15 16:13:55.597229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.639 [2024-04-15 16:13:55.597397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.897 [2024-04-15 16:13:55.613055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.897 [2024-04-15 16:13:55.613217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.897 [2024-04-15 16:13:55.613311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.897 [2024-04-15 16:13:55.628783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.897 [2024-04-15 16:13:55.628972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.897 [2024-04-15 16:13:55.629074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.897 [2024-04-15 16:13:55.644307] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.897 [2024-04-15 16:13:55.644465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.897 [2024-04-15 16:13:55.644612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.897 [2024-04-15 16:13:55.659873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.897 [2024-04-15 16:13:55.660045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.897 [2024-04-15 16:13:55.660143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.897 [2024-04-15 16:13:55.675648] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.897 [2024-04-15 16:13:55.675820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.897 [2024-04-15 16:13:55.675918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.897 [2024-04-15 16:13:55.691727] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.897 [2024-04-15 16:13:55.691885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.897 [2024-04-15 16:13:55.691988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.897 [2024-04-15 16:13:55.707476] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.897 [2024-04-15 16:13:55.707652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.898 [2024-04-15 16:13:55.707773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.898 [2024-04-15 16:13:55.723297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.898 [2024-04-15 16:13:55.723474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.898 [2024-04-15 16:13:55.723586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.898 [2024-04-15 16:13:55.739147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.898 [2024-04-15 16:13:55.739334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.898 [2024-04-15 16:13:55.739507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.898 [2024-04-15 16:13:55.754962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.898 [2024-04-15 16:13:55.755115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.898 [2024-04-15 16:13:55.755221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.898 [2024-04-15 16:13:55.770661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.898 [2024-04-15 16:13:55.770859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.898 [2024-04-15 16:13:55.770968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.898 [2024-04-15 16:13:55.786302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.898 [2024-04-15 16:13:55.786479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.898 [2024-04-15 16:13:55.786658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:21:25.898 [2024-04-15 16:13:55.802669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.898 [2024-04-15 16:13:55.802774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.898 [2024-04-15 16:13:55.802932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.898 [2024-04-15 16:13:55.826218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.898 [2024-04-15 16:13:55.826396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.898 [2024-04-15 16:13:55.826499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.898 [2024-04-15 16:13:55.842351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.898 [2024-04-15 16:13:55.842517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.898 [2024-04-15 16:13:55.842640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.898 [2024-04-15 16:13:55.858216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:25.898 [2024-04-15 16:13:55.858323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.898 [2024-04-15 16:13:55.858385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.157 [2024-04-15 16:13:55.874414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.157 [2024-04-15 16:13:55.874600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.157 [2024-04-15 16:13:55.874724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.157 [2024-04-15 16:13:55.890261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.157 [2024-04-15 16:13:55.890428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.157 [2024-04-15 16:13:55.890525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.157 [2024-04-15 16:13:55.906051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.157 [2024-04-15 16:13:55.906227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.157 [2024-04-15 16:13:55.906338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.157 [2024-04-15 16:13:55.921892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.157 [2024-04-15 16:13:55.922075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.157 [2024-04-15 16:13:55.922188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.157 [2024-04-15 16:13:55.937439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.157 [2024-04-15 16:13:55.937543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.157 [2024-04-15 16:13:55.937628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.157 [2024-04-15 16:13:55.953061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.157 [2024-04-15 16:13:55.953234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.157 [2024-04-15 16:13:55.953367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.157 [2024-04-15 16:13:55.968679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.158 [2024-04-15 16:13:55.968833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.158 [2024-04-15 16:13:55.968944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.158 [2024-04-15 16:13:55.984559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.158 [2024-04-15 16:13:55.984739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.158 [2024-04-15 16:13:55.984889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.158 [2024-04-15 16:13:56.000342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.158 [2024-04-15 16:13:56.000506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.158 [2024-04-15 16:13:56.000625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.158 [2024-04-15 16:13:56.015911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.158 [2024-04-15 16:13:56.016065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.158 [2024-04-15 
16:13:56.016224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.158 [2024-04-15 16:13:56.031670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.158 [2024-04-15 16:13:56.031862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.158 [2024-04-15 16:13:56.032009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.158 [2024-04-15 16:13:56.048094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.158 [2024-04-15 16:13:56.048272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.158 [2024-04-15 16:13:56.048433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.158 [2024-04-15 16:13:56.064648] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.158 [2024-04-15 16:13:56.064889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.158 [2024-04-15 16:13:56.065046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.158 [2024-04-15 16:13:56.081464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.158 [2024-04-15 16:13:56.081701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.158 [2024-04-15 16:13:56.081821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.158 [2024-04-15 16:13:56.097510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.158 [2024-04-15 16:13:56.097728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.158 [2024-04-15 16:13:56.097842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.158 [2024-04-15 16:13:56.113747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.158 [2024-04-15 16:13:56.113917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.158 [2024-04-15 16:13:56.114024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.419 [2024-04-15 16:13:56.130118] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.419 [2024-04-15 16:13:56.130328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15832 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.419 [2024-04-15 16:13:56.130494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.419 [2024-04-15 16:13:56.147555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.419 [2024-04-15 16:13:56.147779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.419 [2024-04-15 16:13:56.147898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.419 [2024-04-15 16:13:56.163438] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.419 [2024-04-15 16:13:56.163653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.419 [2024-04-15 16:13:56.163752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.419 [2024-04-15 16:13:56.180133] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.419 [2024-04-15 16:13:56.180310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.419 [2024-04-15 16:13:56.180425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.419 [2024-04-15 16:13:56.196913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.419 [2024-04-15 16:13:56.197091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.419 [2024-04-15 16:13:56.197226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.419 [2024-04-15 16:13:56.213403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.419 [2024-04-15 16:13:56.213671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.419 [2024-04-15 16:13:56.213775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.419 [2024-04-15 16:13:56.229418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.419 [2024-04-15 16:13:56.229610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.419 [2024-04-15 16:13:56.229733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.419 [2024-04-15 16:13:56.245400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.419 [2024-04-15 16:13:56.245591] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.419 [2024-04-15 16:13:56.245690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.419 [2024-04-15 16:13:56.261157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.419 [2024-04-15 16:13:56.261311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.419 [2024-04-15 16:13:56.261402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.419 [2024-04-15 16:13:56.276830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.419 [2024-04-15 16:13:56.276985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.419 [2024-04-15 16:13:56.277084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.419 [2024-04-15 16:13:56.292701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.419 [2024-04-15 16:13:56.292897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.419 [2024-04-15 16:13:56.293032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.419 [2024-04-15 16:13:56.308819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.419 [2024-04-15 16:13:56.308984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.419 [2024-04-15 16:13:56.309096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.419 [2024-04-15 16:13:56.325023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.419 [2024-04-15 16:13:56.325194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.419 [2024-04-15 16:13:56.325289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.419 [2024-04-15 16:13:56.341006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.420 [2024-04-15 16:13:56.341196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.420 [2024-04-15 16:13:56.341354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.420 [2024-04-15 16:13:56.357460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.420 [2024-04-15 
16:13:56.357676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.420 [2024-04-15 16:13:56.357842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.420 [2024-04-15 16:13:56.374101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.420 [2024-04-15 16:13:56.374282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.420 [2024-04-15 16:13:56.374388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.678 [2024-04-15 16:13:56.390708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.678 [2024-04-15 16:13:56.390924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.678 [2024-04-15 16:13:56.391112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.678 [2024-04-15 16:13:56.407046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.678 [2024-04-15 16:13:56.407235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.678 [2024-04-15 16:13:56.407428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.678 [2024-04-15 16:13:56.423653] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.678 [2024-04-15 16:13:56.423844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.678 [2024-04-15 16:13:56.424023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.678 [2024-04-15 16:13:56.440294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.678 [2024-04-15 16:13:56.440499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.678 [2024-04-15 16:13:56.440687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.679 [2024-04-15 16:13:56.456906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.679 [2024-04-15 16:13:56.457132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.679 [2024-04-15 16:13:56.457310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.679 [2024-04-15 16:13:56.473688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x151f770) 00:21:26.679 [2024-04-15 16:13:56.473914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.679 [2024-04-15 16:13:56.474074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.679 [2024-04-15 16:13:56.490294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.679 [2024-04-15 16:13:56.490495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.679 [2024-04-15 16:13:56.490685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.679 [2024-04-15 16:13:56.507108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.679 [2024-04-15 16:13:56.507290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.679 [2024-04-15 16:13:56.507398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.679 [2024-04-15 16:13:56.523246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.679 [2024-04-15 16:13:56.523407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.679 [2024-04-15 16:13:56.523503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.679 [2024-04-15 16:13:56.538163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.679 [2024-04-15 16:13:56.538337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.679 [2024-04-15 16:13:56.538465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.679 [2024-04-15 16:13:56.553723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.679 [2024-04-15 16:13:56.553883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.679 [2024-04-15 16:13:56.553998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.679 [2024-04-15 16:13:56.569504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.679 [2024-04-15 16:13:56.569797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.679 [2024-04-15 16:13:56.569938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.679 [2024-04-15 16:13:56.586658] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.679 [2024-04-15 16:13:56.586911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.679 [2024-04-15 16:13:56.587019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.679 [2024-04-15 16:13:56.603572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.679 [2024-04-15 16:13:56.603859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.679 [2024-04-15 16:13:56.604003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.679 [2024-04-15 16:13:56.620159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.679 [2024-04-15 16:13:56.620409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.679 [2024-04-15 16:13:56.620582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.679 [2024-04-15 16:13:56.637352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.679 [2024-04-15 16:13:56.637692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.679 [2024-04-15 16:13:56.637966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.938 [2024-04-15 16:13:56.653866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.938 [2024-04-15 16:13:56.654120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.938 [2024-04-15 16:13:56.654317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.938 [2024-04-15 16:13:56.670250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.938 [2024-04-15 16:13:56.670526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.938 [2024-04-15 16:13:56.670668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.938 [2024-04-15 16:13:56.686396] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.938 [2024-04-15 16:13:56.686632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.938 [2024-04-15 16:13:56.686767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:26.938 [2024-04-15 16:13:56.702260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.938 [2024-04-15 16:13:56.702445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.938 [2024-04-15 16:13:56.702559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.938 [2024-04-15 16:13:56.718414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.938 [2024-04-15 16:13:56.718714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.938 [2024-04-15 16:13:56.718822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.938 [2024-04-15 16:13:56.734371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.938 [2024-04-15 16:13:56.734556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.938 [2024-04-15 16:13:56.734682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.938 [2024-04-15 16:13:56.750757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.938 [2024-04-15 16:13:56.750936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.938 [2024-04-15 16:13:56.751032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.938 [2024-04-15 16:13:56.766936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.938 [2024-04-15 16:13:56.767121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.938 [2024-04-15 16:13:56.767218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.938 [2024-04-15 16:13:56.783358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.938 [2024-04-15 16:13:56.783552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.938 [2024-04-15 16:13:56.783714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:26.938 [2024-04-15 16:13:56.799479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f770) 00:21:26.938 [2024-04-15 16:13:56.799723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.938 [2024-04-15 16:13:56.799825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:26.938
00:21:26.938 Latency(us)
00:21:26.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:26.938 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:21:26.938 nvme0n1 : 2.01 15729.00 61.44 0.00 0.00 8130.56 7021.71 31207.62
00:21:26.938 ===================================================================================================================
00:21:26.938 Total : 15729.00 61.44 0.00 0.00 8130.56 7021.71 31207.62
00:21:26.938 0
00:21:26.938 16:13:56 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:26.938 16:13:56 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:26.938 16:13:56 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:26.938 | .driver_specific
00:21:26.938 | .nvme_error
00:21:26.938 | .status_code
00:21:26.938 | .command_transient_transport_error'
00:21:26.938 16:13:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:27.196 16:13:57 -- host/digest.sh@71 -- # (( 124 > 0 ))
00:21:27.196 16:13:57 -- host/digest.sh@73 -- # killprocess 91534
00:21:27.196 16:13:57 -- common/autotest_common.sh@936 -- # '[' -z 91534 ']'
00:21:27.196 16:13:57 -- common/autotest_common.sh@940 -- # kill -0 91534
00:21:27.196 16:13:57 -- common/autotest_common.sh@941 -- # uname
00:21:27.196 16:13:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:27.196 16:13:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91534
00:21:27.196 killing process with pid 91534
Received shutdown signal, test time was about 2.000000 seconds
00:21:27.196
00:21:27.196 Latency(us)
00:21:27.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:27.196 ===================================================================================================================
00:21:27.196 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:27.196 16:13:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:21:27.196 16:13:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:21:27.196 16:13:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91534'
00:21:27.196 16:13:57 -- common/autotest_common.sh@955 -- # kill 91534
00:21:27.196 16:13:57 -- common/autotest_common.sh@960 -- # wait 91534
00:21:27.454 16:13:57 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:21:27.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
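The (( 124 > 0 )) check above is this subtest's pass criterion: get_transient_errcount asks the bdevperf instance, over its RPC socket, for per-bdev NVMe error statistics and pulls out the COMMAND TRANSIENT TRANSPORT ERROR counter that the injected crc32c digest failures incremented. A minimal sketch of that extraction, assuming the same socket path and bdev name as in the trace above, and that the error counters were enabled earlier via bdev_nvme_set_options --nvme-error-stat:

    #!/usr/bin/env bash
    # Sketch only; mirrors the get_transient_errcount trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # bdev_get_iostat returns JSON; the driver_specific.nvme_error block holds the
    # per-status-code counters that the test reads back with this jq filter.
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')

    # The run above reported 124 such completions, so the assertion held.
    (( errcount > 0 ))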
00:21:27.454 16:13:57 -- host/digest.sh@54 -- # local rw bs qd 00:21:27.454 16:13:57 -- host/digest.sh@56 -- # rw=randread 00:21:27.454 16:13:57 -- host/digest.sh@56 -- # bs=131072 00:21:27.454 16:13:57 -- host/digest.sh@56 -- # qd=16 00:21:27.454 16:13:57 -- host/digest.sh@58 -- # bperfpid=91581 00:21:27.454 16:13:57 -- host/digest.sh@60 -- # waitforlisten 91581 /var/tmp/bperf.sock 00:21:27.454 16:13:57 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:27.454 16:13:57 -- common/autotest_common.sh@817 -- # '[' -z 91581 ']' 00:21:27.454 16:13:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:27.454 16:13:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:27.454 16:13:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:27.454 16:13:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:27.454 16:13:57 -- common/autotest_common.sh@10 -- # set +x 00:21:27.454 [2024-04-15 16:13:57.403366] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:27.454 [2024-04-15 16:13:57.403726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:21:27.454 Zero copy mechanism will not be used. 00:21:27.454 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91581 ] 00:21:27.713 [2024-04-15 16:13:57.552258] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.713 [2024-04-15 16:13:57.600018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.971 16:13:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:27.971 16:13:57 -- common/autotest_common.sh@850 -- # return 0 00:21:27.971 16:13:57 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:27.971 16:13:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:27.971 16:13:57 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:27.971 16:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.971 16:13:57 -- common/autotest_common.sh@10 -- # set +x 00:21:27.971 16:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.971 16:13:57 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:27.971 16:13:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:28.540 nvme0n1 00:21:28.540 16:13:58 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:28.540 16:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.540 16:13:58 -- common/autotest_common.sh@10 -- # set +x 00:21:28.540 16:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.540 16:13:58 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:28.540 16:13:58 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
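The trace above is the setup half of run_bperf_err randread 131072 16: bdevperf is started in RPC-wait mode (-z) on core mask 0x2, per-status-code NVMe error counting and a bdev retry count of -1 are switched on, and an NVMe/TCP controller is attached with the data digest enabled (--ddgst). The repeated "Zero copy mechanism will not be used." notice appears because the 131072-byte I/O size exceeds the 65536-byte zero-copy threshold. A condensed sketch of the same sequence follows, with paths and arguments copied from the traced commands; the accel_error_inject_error -o crc32c -t corrupt and bperf_py perform_tests steps in the surrounding trace then drive the digest-error records. This is an illustration of the flow, not a standalone reproduction.

    #!/usr/bin/env bash
    # Sketch only; argument values are taken verbatim from the traced commands.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bperf.sock

    # Start bdevperf waiting for RPC configuration (-z): 128 KiB random reads, QD 16, 2 s run.
    "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
    # (the harness then waits for the RPC socket via waitforlisten before continuing)

    # Enable per-status-code NVMe error counters (read back later via bdev_get_iostat)
    # and set the bdev retry count to -1, matching the traced options.
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the NVMe/TCP controller with data digest enabled; the injected crc32c
    # corruption surfaces as "data digest error" on this connection's receive path.
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0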
00:21:28.540 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:28.540 Zero copy mechanism will not be used. 00:21:28.540 Running I/O for 2 seconds... 00:21:28.540 [2024-04-15 16:13:58.364178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.540 [2024-04-15 16:13:58.364458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.540 [2024-04-15 16:13:58.364661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.540 [2024-04-15 16:13:58.368665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.540 [2024-04-15 16:13:58.368893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.540 [2024-04-15 16:13:58.369082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.540 [2024-04-15 16:13:58.373160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.540 [2024-04-15 16:13:58.373345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.540 [2024-04-15 16:13:58.373458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.540 [2024-04-15 16:13:58.377692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.540 [2024-04-15 16:13:58.377871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.540 [2024-04-15 16:13:58.377984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.540 [2024-04-15 16:13:58.382471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.540 [2024-04-15 16:13:58.382668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.540 [2024-04-15 16:13:58.382784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.540 [2024-04-15 16:13:58.386918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.540 [2024-04-15 16:13:58.387080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.540 [2024-04-15 16:13:58.387179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.540 [2024-04-15 16:13:58.391214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.540 [2024-04-15 16:13:58.391379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.540 [2024-04-15 16:13:58.391481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.540 [2024-04-15 16:13:58.395625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.540 [2024-04-15 16:13:58.395787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.540 [2024-04-15 16:13:58.395992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.540 [2024-04-15 16:13:58.400134] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.540 [2024-04-15 16:13:58.400300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.540 [2024-04-15 16:13:58.400395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.540 [2024-04-15 16:13:58.404397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.540 [2024-04-15 16:13:58.404560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.540 [2024-04-15 16:13:58.404778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.540 [2024-04-15 16:13:58.408869] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.409033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.409132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.413073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.413255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.413357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.417511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.417714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.417817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.421890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.422062] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.422173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.426180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.426354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.426476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.430590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.430748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.430953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.435081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.435275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.435409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.439671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.439834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.439985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.444006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.444196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.444313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.448343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.448527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.448648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.452689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 
16:13:58.452875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.453079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.457290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.457476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.457675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.461736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.461912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.462060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.466312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.466492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.466734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.470905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.471085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.471383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.475666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.475863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.476006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.480072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.480251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.480434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.484468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.484651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.484754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.488601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.488755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.488890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.492903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.493070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.493239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.497199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.497377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.497476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.501526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.501738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.501862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.541 [2024-04-15 16:13:58.506017] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.541 [2024-04-15 16:13:58.506185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.541 [2024-04-15 16:13:58.506287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.802 [2024-04-15 16:13:58.510371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.802 [2024-04-15 16:13:58.510539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.802 [2024-04-15 16:13:58.510687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.802 [2024-04-15 16:13:58.514804] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.802 [2024-04-15 16:13:58.514979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.802 [2024-04-15 16:13:58.515118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.802 [2024-04-15 16:13:58.519287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.802 [2024-04-15 16:13:58.519449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.802 [2024-04-15 16:13:58.519641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.802 [2024-04-15 16:13:58.523732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.802 [2024-04-15 16:13:58.523882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.802 [2024-04-15 16:13:58.523976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.802 [2024-04-15 16:13:58.527766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.802 [2024-04-15 16:13:58.527917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.802 [2024-04-15 16:13:58.528033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.802 [2024-04-15 16:13:58.531828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.802 [2024-04-15 16:13:58.531985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.802 [2024-04-15 16:13:58.532088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.802 [2024-04-15 16:13:58.536038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.802 [2024-04-15 16:13:58.536193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.802 [2024-04-15 16:13:58.536282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.802 [2024-04-15 16:13:58.540183] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.802 [2024-04-15 16:13:58.540342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.540437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:21:28.803 [2024-04-15 16:13:58.544318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.544478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.544585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.548348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.548508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.548631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.552390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.552547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.552661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.556458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.556627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.556731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.560567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.560756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.560893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.564903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.565071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.565181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.569260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.569426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.569516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.573494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.573701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.573863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.577859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.578021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.578168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.582157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.582316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.582448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.586302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.586465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.586564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.590420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.590586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.590682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.594520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.594699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.594808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.598876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.599039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.599147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.603170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.603352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.603452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.607487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.607669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.607771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.611815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.611975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.612069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.616049] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.616211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.616322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.620338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.620498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.620673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.624695] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.624853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.625019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.629024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.629198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:28.803 [2024-04-15 16:13:58.629328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.633211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.633367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.633548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.637436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.637633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.637789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.641496] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.641698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.641809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.645550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.645739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.645833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.649657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.649813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.649916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.803 [2024-04-15 16:13:58.653741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.803 [2024-04-15 16:13:58.653896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.803 [2024-04-15 16:13:58.654011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.657947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.658111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.658293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.662341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.662505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.662695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.666567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.666735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.666900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.670911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.671079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.671179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.675056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.675203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.675291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.679137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.679285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.679401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.683351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.683512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.683679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.687655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.687812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.687918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.691708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.691851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.691983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.695946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.696127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.696297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.700134] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.700278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.700394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.704197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.704364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.704458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.708306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.708458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.708552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.712514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.712681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.712784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.716614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 
00:21:28.804 [2024-04-15 16:13:58.716766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.716878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.720962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.721123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.721240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.725415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.725650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.725834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.729995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.730184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.730318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.734550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.734762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.734910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.738818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.738994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.739101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.743152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.743357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.743512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.747766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.747942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.748092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.752184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.752365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.752508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.756448] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.756647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.756764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.760602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.760779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.760921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.804 [2024-04-15 16:13:58.764924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:28.804 [2024-04-15 16:13:58.765096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.804 [2024-04-15 16:13:58.765215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.769332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.769507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.769657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.773890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.774073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.774198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.778250] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.778419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.778524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.782528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.782727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.782830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.786878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.787054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.787197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.791206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.791383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.791505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.795508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.795678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.795809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.799769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.799922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.800024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.803970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.804134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.804283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:21:29.067 [2024-04-15 16:13:58.808319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.808496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.808652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.812624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.812785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.812885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.816838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.817001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.817143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.821110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.821298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.821438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.825508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.825716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.825848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.829791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.829953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.830055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.834093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.834261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.834362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.838401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.838567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.838692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.842633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.842799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.842894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.846841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.846993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.847086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.851146] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.851308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.851406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.855616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.855773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.855943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.860108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.860276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.860419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.864580] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.864769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.864878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.868852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.869036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.869175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.873001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.873179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.067 [2024-04-15 16:13:58.873349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.067 [2024-04-15 16:13:58.877177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.067 [2024-04-15 16:13:58.877345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.877510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.881298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.881450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.881543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.885325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.885479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.885648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.889487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.889683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.889782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.893525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.893717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:29.068 [2024-04-15 16:13:58.893901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.897757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.897926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.898060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.901936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.902108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.902212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.905981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.906141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.906287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.910063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.910243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.910423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.914287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.914443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.914535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.918360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.918531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.918685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.922637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.922809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.922908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.926807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.926977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.927140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.931006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.931183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.931271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.935031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.935182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.935273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.939003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.939161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.939274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.943021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.943188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.943341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.947076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.947234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.947391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.951109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.951269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.951430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.955209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.955386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.955511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.959276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.959429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.959540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.963237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.963397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.963494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.967238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.967401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.967497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.971235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.971395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.971488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.975195] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.975355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.975448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.979233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 
00:21:29.068 [2024-04-15 16:13:58.979383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.979477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.983130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.983289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.983383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.068 [2024-04-15 16:13:58.987093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.068 [2024-04-15 16:13:58.987246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.068 [2024-04-15 16:13:58.987338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.069 [2024-04-15 16:13:58.991169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.069 [2024-04-15 16:13:58.991337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.069 [2024-04-15 16:13:58.991435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.069 [2024-04-15 16:13:58.995300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.069 [2024-04-15 16:13:58.995480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.069 [2024-04-15 16:13:58.995634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.069 [2024-04-15 16:13:58.999370] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.069 [2024-04-15 16:13:58.999524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.069 [2024-04-15 16:13:58.999657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.069 [2024-04-15 16:13:59.003624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.069 [2024-04-15 16:13:59.003785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.069 [2024-04-15 16:13:59.003880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.069 [2024-04-15 16:13:59.007684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.069 [2024-04-15 16:13:59.007842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.069 [2024-04-15 16:13:59.007963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.069 [2024-04-15 16:13:59.011890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.069 [2024-04-15 16:13:59.012046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.069 [2024-04-15 16:13:59.012148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.069 [2024-04-15 16:13:59.016040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.069 [2024-04-15 16:13:59.016195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.069 [2024-04-15 16:13:59.016287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.069 [2024-04-15 16:13:59.020116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.069 [2024-04-15 16:13:59.020269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.069 [2024-04-15 16:13:59.020368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.069 [2024-04-15 16:13:59.024032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.069 [2024-04-15 16:13:59.024191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.069 [2024-04-15 16:13:59.024284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.069 [2024-04-15 16:13:59.028143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.069 [2024-04-15 16:13:59.028296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.069 [2024-04-15 16:13:59.028451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.329 [2024-04-15 16:13:59.032298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.329 [2024-04-15 16:13:59.032448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.329 [2024-04-15 16:13:59.032540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.329 [2024-04-15 16:13:59.036336] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.036493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.036605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.040410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.040568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.040739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.044421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.044591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.044714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.048440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.048609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.048740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.052491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.052688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.052796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.056577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.056743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.056835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.060559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.060725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.060825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:21:29.330 [2024-04-15 16:13:59.064378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.064536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.064646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.068457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.068645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.068743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.072488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.072676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.072857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.076705] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.076873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.077007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.081029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.081205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.081339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.085317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.085487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.085660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.089447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.089645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.089743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.093548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.093736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.093857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.097683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.097835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.097967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.101885] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.102049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.102173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.105971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.106132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.106265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.110113] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.110299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.110395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.114289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.114444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.114541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.118367] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.118535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.118670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.122749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.122903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.123028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.127005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.127170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.127299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.131403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.131595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.131696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.135704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.135880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.136020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.140075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.140241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.140374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.144376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.144556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.144678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.148682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.148853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:29.330 [2024-04-15 16:13:59.148992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.152851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.153004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.153120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.156937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.157097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.157221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.161092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.161264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.161376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.165212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.165382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.165520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.169446] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.169643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.169741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.173639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.173880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.174037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.178312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.178485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.178602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.182742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.182930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.183087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.187018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.187205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.187341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.191290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.191443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.191562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.195349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.195512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.195617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.199530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.199717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.199894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.203739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.203899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.204051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.207882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.330 [2024-04-15 16:13:59.208062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.330 [2024-04-15 16:13:59.208242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.330 [2024-04-15 16:13:59.212071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.212229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.212329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.216156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.216336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.216431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.220207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.220358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.220454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.224290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.224459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.224560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.228435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.228614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.228787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.232639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.232789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.232879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.236732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 
00:21:29.331 [2024-04-15 16:13:59.236883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.237030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.240897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.241050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.241191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.245134] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.245294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.245391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.249484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.249700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.249837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.254010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.254196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.254302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.258420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.258616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.258757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.262799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.262961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.263125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.267144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.267333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.267433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.271489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.271679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.271853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.275749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.275916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.276016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.280001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.280173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.280307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.284217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.284386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.284567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.288499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.288669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.288840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.331 [2024-04-15 16:13:59.292747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.331 [2024-04-15 16:13:59.292914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.331 [2024-04-15 16:13:59.293047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.297096] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.297258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.297369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.301436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.301639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.301751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.305841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.306005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.306126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.310167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.310329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.310429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.314547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.314752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.314857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.318904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.319074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.319179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.323159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.323318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.323499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:21:29.595 [2024-04-15 16:13:59.327506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.327681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.327825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.331802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.331960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.332087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.336089] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.336266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.336451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.340392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.340544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.340653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.344562] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.344753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.344881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.348937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.349106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.349209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.353145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.353313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.353429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.357495] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.357698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.357799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.361746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.361906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.595 [2024-04-15 16:13:59.362027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.595 [2024-04-15 16:13:59.366035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.595 [2024-04-15 16:13:59.366202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.366305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.370358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.370522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.370635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.374632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.374799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.374892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.378852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.379020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.379151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.383190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.383351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.383468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.387387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.387548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.387661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.391601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.391801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.391915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.395844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.396015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.396241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.400203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.400364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.400538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.404548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.404735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.404829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.408741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.408841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.408902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.412832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.412995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
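Each failing command is echoed by nvme_io_qpair_print_command as, for example, "READ sqid:1 cid:15 nsid:1 lba:20544 len:32": sqid and cid identify the submission queue and command, nsid the namespace, lba the starting logical block, and len reads as a logical-block count rather than a byte count. A small worked example of the resulting byte range, assuming a 512-byte block size (the log does not state the namespace's LBA format, so that value is an assumption):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t lba        = 20544;  /* starting LBA from the READ line above */
    uint32_t nblocks    = 32;     /* "len:32" taken as a block count       */
    uint32_t block_size = 512;    /* assumed; not stated in this log       */

    printf("byte offset = %llu\n", (unsigned long long)(lba * block_size));
    printf("byte length = %u\n", nblocks * block_size);  /* 16 KiB per I/O */
    return 0;
}

The "SGL TRANSPORT DATA BLOCK TRANSPORT" tail of each command line indicates the data is described by a transport SGL descriptor, as expected for a fabrics (TCP) controller rather than a PCIe PRP transfer.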
00:21:29.596 [2024-04-15 16:13:59.413099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.417165] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.417327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.417528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.421396] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.421589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.421728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.425759] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.425923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.426027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.430144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.430318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.430425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.434483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.434659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.434762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.438788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.438960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.439070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.443251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.443416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.443538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.447541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.447715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.447814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.451713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.451861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.451949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.455827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.455978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.456071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.459956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.460124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.460222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.464054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.464229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.464327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.468219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.468392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.468521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.472405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.472563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.472671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.476698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.476858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.476950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.596 [2024-04-15 16:13:59.480717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.596 [2024-04-15 16:13:59.480866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.596 [2024-04-15 16:13:59.480981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.484742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.484899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.484989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.488702] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.488866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.489009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.492897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.493079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.493222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.497049] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.497216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.497308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.501143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 
00:21:29.597 [2024-04-15 16:13:59.501292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.501383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.505183] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.505334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.505441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.509293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.509465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.509624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.513449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.513647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.513748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.517689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.517847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.517980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.521803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.521970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.522071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.525821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.525993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.526103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.529946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.530106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.530202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.534003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.534156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.534288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.538097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.538261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.538364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.542215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.542371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.542461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.546271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.546431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.546522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.550318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.550475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.550565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.554468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.554641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.554745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.597 [2024-04-15 16:13:59.558710] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.597 [2024-04-15 16:13:59.558882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.597 [2024-04-15 16:13:59.558982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.920 [2024-04-15 16:13:59.562993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.920 [2024-04-15 16:13:59.563162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.920 [2024-04-15 16:13:59.563263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.920 [2024-04-15 16:13:59.567182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.920 [2024-04-15 16:13:59.567356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.920 [2024-04-15 16:13:59.567457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.920 [2024-04-15 16:13:59.571463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.920 [2024-04-15 16:13:59.571639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.920 [2024-04-15 16:13:59.571740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.920 [2024-04-15 16:13:59.575806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.920 [2024-04-15 16:13:59.575976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.920 [2024-04-15 16:13:59.576078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.920 [2024-04-15 16:13:59.580152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.920 [2024-04-15 16:13:59.580314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.920 [2024-04-15 16:13:59.580417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.920 [2024-04-15 16:13:59.584468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.920 [2024-04-15 16:13:59.584646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.584754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:21:29.921 [2024-04-15 16:13:59.588818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.588976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.589074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.593108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.593294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.593434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.597490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.597678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.597785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.601828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.601992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.602098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.606228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.606393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.606529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.610541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.610739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.610923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.614861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.615024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.615126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.619124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.619286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.619385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.623461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.623633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.623733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.627758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.627922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.628022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.632341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.632516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.632640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.636885] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.637065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.637184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.641356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.641465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.641530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.645854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.646024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.646146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.650472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.650665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.650843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.655020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.655201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.655309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.659691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.659866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.659970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.664125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.664294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.664430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.668502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.668705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.668812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.672984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.673168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.921 [2024-04-15 16:13:59.673267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:29.921 [2024-04-15 16:13:59.677416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:29.921 [2024-04-15 16:13:59.677600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:29.921 [2024-04-15 16:13:59.677731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:29.921 [2024-04-15 16:13:59.681772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10)
00:21:29.921 [2024-04-15 16:13:59.681949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:29.921 [2024-04-15 16:13:59.682053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-message pattern repeats for each subsequent READ (sqid:1 cid:15, nsid:1, len:32) on tqpair=(0x8ecc10): a data digest error from nvme_tcp.c:1447, the command print from nvme_qpair.c: 243 with the affected lba, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c: 474, continuing from 16:13:59.686 through 16:14:00.304 ...]
00:21:30.445 [2024-04-15 16:14:00.308422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10)
00:21:30.445 [2024-04-15 16:14:00.308591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.445 [2024-04-15 16:14:00.308691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:30.445 [2024-04-15 16:14:00.312670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:30.445 [2024-04-15 16:14:00.312822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.445 [2024-04-15 16:14:00.312969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.445 [2024-04-15 16:14:00.316905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:30.445 [2024-04-15 16:14:00.317057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.445 [2024-04-15 16:14:00.317174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:30.445 [2024-04-15 16:14:00.321105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:30.445 [2024-04-15 16:14:00.321275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.445 [2024-04-15 16:14:00.321374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:30.445 [2024-04-15 16:14:00.325346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:30.445 [2024-04-15 16:14:00.325496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.445 [2024-04-15 16:14:00.325658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:30.445 [2024-04-15 16:14:00.329688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:30.445 [2024-04-15 16:14:00.329848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.445 [2024-04-15 16:14:00.330008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.445 [2024-04-15 16:14:00.334252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:30.445 [2024-04-15 16:14:00.334427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.445 [2024-04-15 16:14:00.334558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:30.445 [2024-04-15 16:14:00.338784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:30.445 [2024-04-15 16:14:00.338965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.445 [2024-04-15 16:14:00.339069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:30.445 [2024-04-15 16:14:00.343234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:30.445 [2024-04-15 16:14:00.343397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.445 [2024-04-15 16:14:00.343569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:30.445 [2024-04-15 16:14:00.347671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:30.445 [2024-04-15 16:14:00.347827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.445 [2024-04-15 16:14:00.347931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.445 [2024-04-15 16:14:00.351902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ecc10) 00:21:30.445 [2024-04-15 16:14:00.352082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.445 [2024-04-15 16:14:00.352222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:30.445 00:21:30.445 Latency(us) 00:21:30.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.445 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:30.445 nvme0n1 : 2.00 7211.72 901.46 0.00 0.00 2215.78 1693.01 9175.04 00:21:30.445 =================================================================================================================== 00:21:30.445 Total : 7211.72 901.46 0.00 0.00 2215.78 1693.01 9175.04 00:21:30.445 0 00:21:30.445 16:14:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:30.445 16:14:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:30.445 16:14:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:30.445 | .driver_specific 00:21:30.445 | .nvme_error 00:21:30.445 | .status_code 00:21:30.445 | .command_transient_transport_error' 00:21:30.445 16:14:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:30.703 16:14:00 -- host/digest.sh@71 -- # (( 465 > 0 )) 00:21:30.703 16:14:00 -- host/digest.sh@73 -- # killprocess 91581 00:21:30.703 16:14:00 -- common/autotest_common.sh@936 -- # '[' -z 91581 ']' 00:21:30.703 16:14:00 -- common/autotest_common.sh@940 -- # kill -0 91581 00:21:30.703 16:14:00 -- common/autotest_common.sh@941 -- # uname 00:21:30.703 16:14:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:30.703 16:14:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91581 00:21:30.961 killing process with pid 91581 00:21:30.961 Received 
shutdown signal, test time was about 2.000000 seconds 00:21:30.961 00:21:30.961 Latency(us) 00:21:30.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.961 =================================================================================================================== 00:21:30.961 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.961 16:14:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:30.961 16:14:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:30.961 16:14:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91581' 00:21:30.961 16:14:00 -- common/autotest_common.sh@955 -- # kill 91581 00:21:30.961 16:14:00 -- common/autotest_common.sh@960 -- # wait 91581 00:21:30.961 16:14:00 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:21:30.961 16:14:00 -- host/digest.sh@54 -- # local rw bs qd 00:21:30.961 16:14:00 -- host/digest.sh@56 -- # rw=randwrite 00:21:30.961 16:14:00 -- host/digest.sh@56 -- # bs=4096 00:21:30.961 16:14:00 -- host/digest.sh@56 -- # qd=128 00:21:30.961 16:14:00 -- host/digest.sh@58 -- # bperfpid=91634 00:21:30.961 16:14:00 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:21:30.961 16:14:00 -- host/digest.sh@60 -- # waitforlisten 91634 /var/tmp/bperf.sock 00:21:30.961 16:14:00 -- common/autotest_common.sh@817 -- # '[' -z 91634 ']' 00:21:30.961 16:14:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:30.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:30.961 16:14:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:30.961 16:14:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:30.961 16:14:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:30.961 16:14:00 -- common/autotest_common.sh@10 -- # set +x 00:21:31.219 [2024-04-15 16:14:00.938142] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
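For reference, the error-count check traced above (host/digest.sh@71, closing out the randread leg) reduces to the minimal sketch below. The rpc.py path, socket, bdev name and jq filter are copied from the trace; the shell variables and the final echo are illustrative only.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# --nvme-error-stat, passed to bdev_nvme_set_options in this test, is what exposes
# the per-status-code counters under driver_specific.nvme_error in bdev_get_iostat.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# The randread run above recorded 465 COMMAND TRANSIENT TRANSPORT ERROR completions,
# so this assertion passes before the bperf process (pid 91581) is killed.
(( errcount > 0 )) && echo "transient transport errors recorded: $errcount"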
00:21:31.219 [2024-04-15 16:14:00.938446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91634 ] 00:21:31.219 [2024-04-15 16:14:01.084254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.219 [2024-04-15 16:14:01.135877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.152 16:14:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:32.152 16:14:01 -- common/autotest_common.sh@850 -- # return 0 00:21:32.152 16:14:01 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:32.152 16:14:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:32.410 16:14:02 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:32.410 16:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.410 16:14:02 -- common/autotest_common.sh@10 -- # set +x 00:21:32.410 16:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.410 16:14:02 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:32.410 16:14:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:32.713 nvme0n1 00:21:32.713 16:14:02 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:32.713 16:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.713 16:14:02 -- common/autotest_common.sh@10 -- # set +x 00:21:32.713 16:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.713 16:14:02 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:32.713 16:14:02 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:32.713 Running I/O for 2 seconds... 
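The randwrite leg set up in the trace above can be read as the sketch below. Every path, flag, address and NQN is copied from the trace; the rpc_cmd stub is an assumption (in the autotest harness it is a thin wrapper around scripts/rpc.py, and this portion of the log does not expand the socket it resolves to).

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bperf.sock
rpc_cmd() { "$spdk"/scripts/rpc.py "$@"; }  # assumed stand-in for the harness helper

# Start bdevperf on its own RPC socket: 4096-byte random writes, queue depth 128,
# 2-second run, core mask 0x2; -z defers the workload until perform_tests is sent.
# (The harness waits for /var/tmp/bperf.sock to appear -- waitforlisten -- before
# issuing any RPCs against it.)
"$spdk"/build/examples/bdevperf -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &

# Count NVMe errors per status code and keep retrying failed I/O at the bdev layer.
"$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Inject CRC32C corruption in the accel layer (flags exactly as traced), so that
# computed data digests stop matching.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

# Attach the NVMe-oF/TCP controller with data digest enabled (--ddgst).
"$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Kick off the 2-second run; each WRITE completion that follows reports
# COMMAND TRANSIENT TRANSPORT ERROR (00/22) because its digest check fails.
"$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests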
00:21:32.713 [2024-04-15 16:14:02.648067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fef90 00:21:32.713 [2024-04-15 16:14:02.650859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.713 [2024-04-15 16:14:02.651055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.713 [2024-04-15 16:14:02.663748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190feb58 00:21:32.713 [2024-04-15 16:14:02.666261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.713 [2024-04-15 16:14:02.666441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:32.713 [2024-04-15 16:14:02.678709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fe2e8 00:21:32.970 [2024-04-15 16:14:02.681157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.970 [2024-04-15 16:14:02.681326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:32.970 [2024-04-15 16:14:02.693751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fda78 00:21:32.970 [2024-04-15 16:14:02.696277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.970 [2024-04-15 16:14:02.696440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:32.970 [2024-04-15 16:14:02.708955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fd208 00:21:32.971 [2024-04-15 16:14:02.711429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.711607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.724328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fc998 00:21:32.971 [2024-04-15 16:14:02.726880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.727046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.739372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fc128 00:21:32.971 [2024-04-15 16:14:02.741690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.741854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.754256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fb8b8 00:21:32.971 [2024-04-15 16:14:02.756681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.756848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.769027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fb048 00:21:32.971 [2024-04-15 16:14:02.771351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.771505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.784160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fa7d8 00:21:32.971 [2024-04-15 16:14:02.786512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.786686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.799437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f9f68 00:21:32.971 [2024-04-15 16:14:02.801862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.802026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.814898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f96f8 00:21:32.971 [2024-04-15 16:14:02.817167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.817336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.829741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f8e88 00:21:32.971 [2024-04-15 16:14:02.831978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.832129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.844392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f8618 00:21:32.971 [2024-04-15 16:14:02.846637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.846803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.859183] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f7da8 00:21:32.971 [2024-04-15 16:14:02.861321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.861475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.874041] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f7538 00:21:32.971 [2024-04-15 16:14:02.876341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.876510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.888908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f6cc8 00:21:32.971 [2024-04-15 16:14:02.891128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.891287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.904247] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f6458 00:21:32.971 [2024-04-15 16:14:02.906519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.906699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.919863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f5be8 00:21:32.971 [2024-04-15 16:14:02.922098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.971 [2024-04-15 16:14:02.922284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:32.971 [2024-04-15 16:14:02.935533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f5378 00:21:33.230 [2024-04-15 16:14:02.937803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:02.937972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:02.951146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f4b08 00:21:33.230 [2024-04-15 16:14:02.953331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:02.953497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:02.966542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f4298 00:21:33.230 [2024-04-15 16:14:02.968710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:02.968877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:02.982025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f3a28 00:21:33.230 [2024-04-15 16:14:02.984209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:02.984388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:02.997677] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f31b8 00:21:33.230 [2024-04-15 16:14:02.999795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:02.999963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:03.013493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f2948 00:21:33.230 [2024-04-15 16:14:03.015735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:03.015915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:03.029416] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f20d8 00:21:33.230 [2024-04-15 16:14:03.031604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:03.031784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:03.045011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f1868 00:21:33.230 [2024-04-15 16:14:03.047096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:03.047273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:03.060458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f0ff8 00:21:33.230 [2024-04-15 16:14:03.062553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:03.062746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:03.076049] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f0788 00:21:33.230 [2024-04-15 16:14:03.078105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:03.078278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:03.091665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190eff18 00:21:33.230 [2024-04-15 16:14:03.093697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:03.093875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:03.106821] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190ef6a8 00:21:33.230 [2024-04-15 16:14:03.108724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:03.108894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:03.121856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190eee38 00:21:33.230 [2024-04-15 16:14:03.123785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:03.123938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:03.136803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190ee5c8 00:21:33.230 [2024-04-15 16:14:03.138741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:03.138893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:03.151516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190edd58 00:21:33.230 [2024-04-15 16:14:03.153451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:03.153645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:03.166903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190ed4e8 00:21:33.230 [2024-04-15 16:14:03.168727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:03.168886] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:33.230 [2024-04-15 16:14:03.181730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190ecc78 00:21:33.230 [2024-04-15 16:14:03.183735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.230 [2024-04-15 16:14:03.183914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.197773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190ec408 00:21:33.489 [2024-04-15 16:14:03.199849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.200044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.214126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190ebb98 00:21:33.489 [2024-04-15 16:14:03.216163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.216347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.230093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190eb328 00:21:33.489 [2024-04-15 16:14:03.231988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.232159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.245158] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190eaab8 00:21:33.489 [2024-04-15 16:14:03.247103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.247265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.260777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190ea248 00:21:33.489 [2024-04-15 16:14:03.262645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.262852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.276675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e99d8 00:21:33.489 [2024-04-15 16:14:03.278517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 
16:14:03.278704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.291893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e9168 00:21:33.489 [2024-04-15 16:14:03.293592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.293772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.306702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e88f8 00:21:33.489 [2024-04-15 16:14:03.308467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.308648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.321852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e8088 00:21:33.489 [2024-04-15 16:14:03.323654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.323839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.337738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e7818 00:21:33.489 [2024-04-15 16:14:03.339566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.339754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.353908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e6fa8 00:21:33.489 [2024-04-15 16:14:03.355720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.355903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.369920] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e6738 00:21:33.489 [2024-04-15 16:14:03.371650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.371820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.385798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e5ec8 00:21:33.489 [2024-04-15 16:14:03.387515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:33.489 [2024-04-15 16:14:03.387730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.401229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e5658 00:21:33.489 [2024-04-15 16:14:03.402944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.403123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.416805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e4de8 00:21:33.489 [2024-04-15 16:14:03.418404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.418565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.431981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e4578 00:21:33.489 [2024-04-15 16:14:03.433547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.433732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:33.489 [2024-04-15 16:14:03.447392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e3d08 00:21:33.489 [2024-04-15 16:14:03.448949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.489 [2024-04-15 16:14:03.449109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:33.748 [2024-04-15 16:14:03.462745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e3498 00:21:33.748 [2024-04-15 16:14:03.464282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.748 [2024-04-15 16:14:03.464444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:33.748 [2024-04-15 16:14:03.477713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e2c28 00:21:33.748 [2024-04-15 16:14:03.479295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.748 [2024-04-15 16:14:03.479461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:33.748 [2024-04-15 16:14:03.493229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e23b8 00:21:33.748 [2024-04-15 16:14:03.494734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18863 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.748 [2024-04-15 16:14:03.494893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:33.748 [2024-04-15 16:14:03.508413] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e1b48 00:21:33.748 [2024-04-15 16:14:03.509926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.748 [2024-04-15 16:14:03.510085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:33.748 [2024-04-15 16:14:03.523702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e12d8 00:21:33.748 [2024-04-15 16:14:03.525199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.748 [2024-04-15 16:14:03.525358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:33.748 [2024-04-15 16:14:03.538721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e0a68 00:21:33.748 [2024-04-15 16:14:03.540065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.748 [2024-04-15 16:14:03.540219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:33.748 [2024-04-15 16:14:03.553609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e01f8 00:21:33.748 [2024-04-15 16:14:03.555024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.748 [2024-04-15 16:14:03.555184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:33.748 [2024-04-15 16:14:03.568486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190df988 00:21:33.748 [2024-04-15 16:14:03.569920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.748 [2024-04-15 16:14:03.570079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:33.748 [2024-04-15 16:14:03.584020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190df118 00:21:33.748 [2024-04-15 16:14:03.585367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.748 [2024-04-15 16:14:03.585525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:33.748 [2024-04-15 16:14:03.598993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190de8a8 00:21:33.749 [2024-04-15 16:14:03.600369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:20862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.749 [2024-04-15 16:14:03.600519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:33.749 [2024-04-15 16:14:03.613749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190de038 00:21:33.749 [2024-04-15 16:14:03.615113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.749 [2024-04-15 16:14:03.615276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:33.749 [2024-04-15 16:14:03.634536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190de038 00:21:33.749 [2024-04-15 16:14:03.637103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.749 [2024-04-15 16:14:03.637260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.749 [2024-04-15 16:14:03.649202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190de8a8 00:21:33.749 [2024-04-15 16:14:03.651633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.749 [2024-04-15 16:14:03.651782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:33.749 [2024-04-15 16:14:03.663595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190df118 00:21:33.749 [2024-04-15 16:14:03.666079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.749 [2024-04-15 16:14:03.666237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:33.749 [2024-04-15 16:14:03.678833] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190df988 00:21:33.749 [2024-04-15 16:14:03.681388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.749 [2024-04-15 16:14:03.681563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:33.749 [2024-04-15 16:14:03.694113] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e01f8 00:21:33.749 [2024-04-15 16:14:03.696581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.749 [2024-04-15 16:14:03.696769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:33.749 [2024-04-15 16:14:03.709035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e0a68 00:21:33.749 [2024-04-15 16:14:03.711458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.749 [2024-04-15 16:14:03.711629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.724473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e12d8 00:21:34.008 [2024-04-15 16:14:03.726936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.727107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.740184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e1b48 00:21:34.008 [2024-04-15 16:14:03.742797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.742983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.756112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e23b8 00:21:34.008 [2024-04-15 16:14:03.758658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.758862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.771445] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e2c28 00:21:34.008 [2024-04-15 16:14:03.773933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.774110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.786773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e3498 00:21:34.008 [2024-04-15 16:14:03.789200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.789370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.802351] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e3d08 00:21:34.008 [2024-04-15 16:14:03.804757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.804925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.817703] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e4578 00:21:34.008 [2024-04-15 
16:14:03.820081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.820249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.833431] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e4de8 00:21:34.008 [2024-04-15 16:14:03.835832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.836008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.848936] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e5658 00:21:34.008 [2024-04-15 16:14:03.851246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.851413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.864295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e5ec8 00:21:34.008 [2024-04-15 16:14:03.866553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.866736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.879356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e6738 00:21:34.008 [2024-04-15 16:14:03.881592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.881778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.894153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e6fa8 00:21:34.008 [2024-04-15 16:14:03.896305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.896462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.908716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e7818 00:21:34.008 [2024-04-15 16:14:03.910883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.911035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.923120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e8088 
00:21:34.008 [2024-04-15 16:14:03.925195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.925344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.937484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e88f8 00:21:34.008 [2024-04-15 16:14:03.939551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.008 [2024-04-15 16:14:03.939723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:34.008 [2024-04-15 16:14:03.952279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e9168 00:21:34.008 [2024-04-15 16:14:03.954473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.009 [2024-04-15 16:14:03.954679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:34.009 [2024-04-15 16:14:03.967394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190e99d8 00:21:34.009 [2024-04-15 16:14:03.969445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.009 [2024-04-15 16:14:03.969648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:03.982415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190ea248 00:21:34.268 [2024-04-15 16:14:03.984518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:03.984713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:03.997217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190eaab8 00:21:34.268 [2024-04-15 16:14:03.999318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:03.999498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.012012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190eb328 00:21:34.268 [2024-04-15 16:14:04.014042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.014212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.026650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with 
pdu=0x2000190ebb98 00:21:34.268 [2024-04-15 16:14:04.028670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.028845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.041492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190ec408 00:21:34.268 [2024-04-15 16:14:04.043488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.043654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.056396] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190ecc78 00:21:34.268 [2024-04-15 16:14:04.058421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.058594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.070910] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190ed4e8 00:21:34.268 [2024-04-15 16:14:04.072904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.073061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.086177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190edd58 00:21:34.268 [2024-04-15 16:14:04.088140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.088335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.101722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190ee5c8 00:21:34.268 [2024-04-15 16:14:04.103717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.103888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.117268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190eee38 00:21:34.268 [2024-04-15 16:14:04.119275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.119443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.132424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c67c00) with pdu=0x2000190ef6a8 00:21:34.268 [2024-04-15 16:14:04.134370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.134561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.147932] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190eff18 00:21:34.268 [2024-04-15 16:14:04.149874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.150060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.163542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f0788 00:21:34.268 [2024-04-15 16:14:04.165435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.165627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.178847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f0ff8 00:21:34.268 [2024-04-15 16:14:04.180750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.180913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.194120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f1868 00:21:34.268 [2024-04-15 16:14:04.196027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.196189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.209344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f20d8 00:21:34.268 [2024-04-15 16:14:04.211205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.211368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:34.268 [2024-04-15 16:14:04.224515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f2948 00:21:34.268 [2024-04-15 16:14:04.226355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.268 [2024-04-15 16:14:04.226522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:34.526 [2024-04-15 16:14:04.240019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1c67c00) with pdu=0x2000190f31b8 00:21:34.526 [2024-04-15 16:14:04.241805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.526 [2024-04-15 16:14:04.241983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:34.526 [2024-04-15 16:14:04.255337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f3a28 00:21:34.526 [2024-04-15 16:14:04.257094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.526 [2024-04-15 16:14:04.257268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:34.526 [2024-04-15 16:14:04.270729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f4298 00:21:34.526 [2024-04-15 16:14:04.272484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.526 [2024-04-15 16:14:04.272669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:34.526 [2024-04-15 16:14:04.285822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f4b08 00:21:34.526 [2024-04-15 16:14:04.287527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.526 [2024-04-15 16:14:04.287717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:34.526 [2024-04-15 16:14:04.300855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f5378 00:21:34.526 [2024-04-15 16:14:04.302547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.526 [2024-04-15 16:14:04.302740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:34.526 [2024-04-15 16:14:04.315672] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f5be8 00:21:34.526 [2024-04-15 16:14:04.317338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.526 [2024-04-15 16:14:04.317501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:34.526 [2024-04-15 16:14:04.330478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f6458 00:21:34.526 [2024-04-15 16:14:04.332150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.526 [2024-04-15 16:14:04.332309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:34.526 [2024-04-15 16:14:04.345690] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f6cc8 00:21:34.526 [2024-04-15 16:14:04.347322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.526 [2024-04-15 16:14:04.347492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:34.526 [2024-04-15 16:14:04.360782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f7538 00:21:34.526 [2024-04-15 16:14:04.362387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.526 [2024-04-15 16:14:04.362561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:34.527 [2024-04-15 16:14:04.376132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f7da8 00:21:34.527 [2024-04-15 16:14:04.377760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.527 [2024-04-15 16:14:04.377928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:34.527 [2024-04-15 16:14:04.391444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f8618 00:21:34.527 [2024-04-15 16:14:04.393033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.527 [2024-04-15 16:14:04.393196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:34.527 [2024-04-15 16:14:04.406430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f8e88 00:21:34.527 [2024-04-15 16:14:04.407993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.527 [2024-04-15 16:14:04.408158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:34.527 [2024-04-15 16:14:04.421723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f96f8 00:21:34.527 [2024-04-15 16:14:04.423305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.527 [2024-04-15 16:14:04.423522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:34.527 [2024-04-15 16:14:04.437199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190f9f68 00:21:34.527 [2024-04-15 16:14:04.438740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.527 [2024-04-15 16:14:04.438910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:34.527 
[2024-04-15 16:14:04.452465] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fa7d8 00:21:34.527 [2024-04-15 16:14:04.454021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.527 [2024-04-15 16:14:04.454208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:34.527 [2024-04-15 16:14:04.467774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fb048 00:21:34.527 [2024-04-15 16:14:04.469274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.527 [2024-04-15 16:14:04.469437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:34.527 [2024-04-15 16:14:04.483263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fb8b8 00:21:34.527 [2024-04-15 16:14:04.484739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.527 [2024-04-15 16:14:04.484907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:34.784 [2024-04-15 16:14:04.498618] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fc128 00:21:34.784 [2024-04-15 16:14:04.500078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-04-15 16:14:04.500283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:34.784 [2024-04-15 16:14:04.514157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fc998 00:21:34.784 [2024-04-15 16:14:04.515567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-04-15 16:14:04.515760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:34.784 [2024-04-15 16:14:04.529492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fd208 00:21:34.784 [2024-04-15 16:14:04.530977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-04-15 16:14:04.531138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:34.784 [2024-04-15 16:14:04.544813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fda78 00:21:34.784 [2024-04-15 16:14:04.546256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-04-15 16:14:04.546449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 
sqhd:0008 p:0 m:0 dnr:0 00:21:34.784 [2024-04-15 16:14:04.560392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fe2e8 00:21:34.784 [2024-04-15 16:14:04.561780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-04-15 16:14:04.561951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:34.784 [2024-04-15 16:14:04.575199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190feb58 00:21:34.784 [2024-04-15 16:14:04.576464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-04-15 16:14:04.576636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:34.784 [2024-04-15 16:14:04.596123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fef90 00:21:34.784 [2024-04-15 16:14:04.598712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-04-15 16:14:04.598873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.784 [2024-04-15 16:14:04.611473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190feb58 00:21:34.784 [2024-04-15 16:14:04.614089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-04-15 16:14:04.614251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:34.784 [2024-04-15 16:14:04.626839] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67c00) with pdu=0x2000190fe2e8 00:21:34.784 [2024-04-15 16:14:04.629271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.784 [2024-04-15 16:14:04.629428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:34.784 00:21:34.784 Latency(us) 00:21:34.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.785 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:34.785 nvme0n1 : 2.01 16565.02 64.71 0.00 0.00 7720.76 6865.68 29959.31 00:21:34.785 =================================================================================================================== 00:21:34.785 Total : 16565.02 64.71 0.00 0.00 7720.76 6865.68 29959.31 00:21:34.785 0 00:21:34.785 16:14:04 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:34.785 16:14:04 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:34.785 16:14:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:34.785 16:14:04 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:34.785 | .driver_specific 00:21:34.785 | .nvme_error 00:21:34.785 | .status_code 
00:21:34.785 | .command_transient_transport_error' 00:21:35.042 16:14:04 -- host/digest.sh@71 -- # (( 130 > 0 )) 00:21:35.043 16:14:04 -- host/digest.sh@73 -- # killprocess 91634 00:21:35.043 16:14:04 -- common/autotest_common.sh@936 -- # '[' -z 91634 ']' 00:21:35.043 16:14:04 -- common/autotest_common.sh@940 -- # kill -0 91634 00:21:35.043 16:14:04 -- common/autotest_common.sh@941 -- # uname 00:21:35.043 16:14:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:35.043 16:14:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91634 00:21:35.043 killing process with pid 91634 00:21:35.043 Received shutdown signal, test time was about 2.000000 seconds 00:21:35.043 00:21:35.043 Latency(us) 00:21:35.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.043 =================================================================================================================== 00:21:35.043 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.043 16:14:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:35.043 16:14:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:35.043 16:14:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91634' 00:21:35.043 16:14:04 -- common/autotest_common.sh@955 -- # kill 91634 00:21:35.043 16:14:04 -- common/autotest_common.sh@960 -- # wait 91634 00:21:35.301 16:14:05 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:35.301 16:14:05 -- host/digest.sh@54 -- # local rw bs qd 00:21:35.301 16:14:05 -- host/digest.sh@56 -- # rw=randwrite 00:21:35.301 16:14:05 -- host/digest.sh@56 -- # bs=131072 00:21:35.301 16:14:05 -- host/digest.sh@56 -- # qd=16 00:21:35.301 16:14:05 -- host/digest.sh@58 -- # bperfpid=91689 00:21:35.301 16:14:05 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:35.301 16:14:05 -- host/digest.sh@60 -- # waitforlisten 91689 /var/tmp/bperf.sock 00:21:35.301 16:14:05 -- common/autotest_common.sh@817 -- # '[' -z 91689 ']' 00:21:35.301 16:14:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:35.301 16:14:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:35.301 16:14:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:35.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:35.301 16:14:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:35.301 16:14:05 -- common/autotest_common.sh@10 -- # set +x 00:21:35.301 [2024-04-15 16:14:05.209359] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:35.301 [2024-04-15 16:14:05.209680] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91689 ] 00:21:35.301 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:35.301 Zero copy mechanism will not be used. 
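The transient-error check traced above (host/digest.sh@71 and @28) reduces to a single bdev_get_iostat RPC piped through a jq filter. A minimal bash sketch of that step, using the repo path and bperf socket shown in the trace; the helper mirrors digest.sh's get_transient_errcount but is a reconstruction, not the verbatim script:

  # Query per-command NVMe error statistics from the bdevperf app and pull out the
  # COMMAND TRANSIENT TRANSPORT ERROR counter kept by the bdev_nvme layer.
  get_transient_errcount() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  }

  # The check passes when the counter is non-zero; in the run above it read 130.
  (( $(get_transient_errcount nvme0n1) > 0 ))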
00:21:35.559 [2024-04-15 16:14:05.356543] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.559 [2024-04-15 16:14:05.405159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.502 16:14:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:36.502 16:14:06 -- common/autotest_common.sh@850 -- # return 0 00:21:36.502 16:14:06 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:36.502 16:14:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:36.502 16:14:06 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:36.502 16:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.502 16:14:06 -- common/autotest_common.sh@10 -- # set +x 00:21:36.502 16:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.502 16:14:06 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:36.502 16:14:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:36.760 nvme0n1 00:21:36.760 16:14:06 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:36.760 16:14:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.760 16:14:06 -- common/autotest_common.sh@10 -- # set +x 00:21:36.760 16:14:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.760 16:14:06 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:36.760 16:14:06 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:37.021 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:37.021 Zero copy mechanism will not be used. 00:21:37.021 Running I/O for 2 seconds... 
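Condensed from the trace above, the setup for this second pass (randwrite, 128 KiB I/O, queue depth 16) is roughly the bash sequence below. This is a sketch with arguments copied from the trace, not the script itself: in digest.sh the accel_error_inject_error calls go through rpc_cmd (whose socket is not shown in this excerpt), while the bdev_nvme calls go through bperf_rpc to /var/tmp/bperf.sock, so the socket is left off the two injection calls here.

  # Start bdevperf on core mask 0x2 in wait-for-RPC mode (-z): 128 KiB random writes, QD 16, 2 s run
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Keep per-command NVMe error statistics and retry failed I/O indefinitely
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Disable crc32c error injection before attaching the TCP controller with data digest enabled
  $rpc accel_error_inject_error -o crc32c -t disable        # issued via rpc_cmd in the script
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt crc32c results so data digests start failing, then drive the workload
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32  # issued via rpc_cmd in the script
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The digest error records that follow are the expected result of that injection: each corrupted digest surfaces as a data_crc32_calc_done error on the TCP qpair and is reported back as a COMMAND TRANSIENT TRANSPORT ERROR completion.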
00:21:37.021 [2024-04-15 16:14:06.840301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.840826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.841006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.021 [2024-04-15 16:14:06.844980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.845230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.845456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.021 [2024-04-15 16:14:06.849508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.849748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.849947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.021 [2024-04-15 16:14:06.853859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.854052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.854231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.021 [2024-04-15 16:14:06.858247] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.858442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.858632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.021 [2024-04-15 16:14:06.862648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.862864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.863003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.021 [2024-04-15 16:14:06.866914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.867103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.867268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.021 [2024-04-15 16:14:06.871218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.871402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.871682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.021 [2024-04-15 16:14:06.875623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.875841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.875977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.021 [2024-04-15 16:14:06.879917] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.880138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.880322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.021 [2024-04-15 16:14:06.884331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.884534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.884728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.021 [2024-04-15 16:14:06.888569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.888778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.888915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.021 [2024-04-15 16:14:06.892784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.893006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.893142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.021 [2024-04-15 16:14:06.897109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.897298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.897505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.021 [2024-04-15 16:14:06.901521] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.021 [2024-04-15 16:14:06.901735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.021 [2024-04-15 16:14:06.901869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.905528] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.905936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.906144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.909418] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.909642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.909794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.913623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.913827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.913975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.917919] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.918111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.918266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.922334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.922581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.922735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.926724] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.926909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.927044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.931127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.931328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.931532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.935449] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.935667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.935818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.939708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.939937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.940091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.944194] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.944407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.944589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.948665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.948892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.949039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.952835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.953221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.953410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.957023] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.957409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 
16:14:06.957621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.961473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.961702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.961855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.965925] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.966126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.966266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.970469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.970688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.970896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.975019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.975236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.975458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.979642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.979862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.980006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.022 [2024-04-15 16:14:06.984110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.022 [2024-04-15 16:14:06.984307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.022 [2024-04-15 16:14:06.984464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:06.988746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:06.988981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:37.283 [2024-04-15 16:14:06.989125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:06.993385] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:06.993619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:06.993763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:06.997918] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:06.998112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:06.998255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.002471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.002708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.002891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.007139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.007390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.007685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.011421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.011846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.011996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.015718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.016131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.016286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.020250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.020463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.020646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.024677] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.024896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.025037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.029007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.029217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.029370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.033426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.033681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.033831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.037756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.037966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.038106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.041926] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.042142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.042284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.046369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.046570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.046833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.050911] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.051187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.051337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.055147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.055346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.055539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.059789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.060218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.060364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.064209] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.064432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.064629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.068730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.068928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.069095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.073171] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.073374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.073540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.077747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.077952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.078136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.082180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.082389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.082557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.283 [2024-04-15 16:14:07.086545] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.283 [2024-04-15 16:14:07.086796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-04-15 16:14:07.086940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.090903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.091121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.091261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.095383] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.095624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.095758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.099812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.100014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.100156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.104268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.104485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.104646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.108244] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.108633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.108846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.112346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 
16:14:07.112707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.112865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.116837] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.117051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.117206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.121558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.121785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.121977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.126312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.126512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.126702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.131124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.131362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.131500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.135760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.136020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.136173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.140192] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.140380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.140569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.144332] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with 
pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.144517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.144705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.148631] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.148817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.148952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.152875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.153078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.153246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.157297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.157489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.157674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.162012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.162264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.162418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.166552] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.166805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.166942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.170853] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.171197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.171346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.174922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.175301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.175538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.179411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.179627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.179891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.183804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.184030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.184185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.188295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.188490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.188691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.192735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.192960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.193139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.197328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.197546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.197848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.201708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.201903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.202044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.205993] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.284 [2024-04-15 16:14:07.206186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-04-15 16:14:07.206329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.284 [2024-04-15 16:14:07.210343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.285 [2024-04-15 16:14:07.210556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-04-15 16:14:07.210718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.285 [2024-04-15 16:14:07.214828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.285 [2024-04-15 16:14:07.215019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-04-15 16:14:07.215175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.285 [2024-04-15 16:14:07.219355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.285 [2024-04-15 16:14:07.219557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-04-15 16:14:07.219740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.285 [2024-04-15 16:14:07.223393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.285 [2024-04-15 16:14:07.223764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-04-15 16:14:07.223985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.285 [2024-04-15 16:14:07.227626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.285 [2024-04-15 16:14:07.227995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-04-15 16:14:07.228188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.285 [2024-04-15 16:14:07.232098] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.285 [2024-04-15 16:14:07.232307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-04-15 16:14:07.232504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.285 
[2024-04-15 16:14:07.236530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.285 [2024-04-15 16:14:07.236740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-04-15 16:14:07.236936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.285 [2024-04-15 16:14:07.241016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.285 [2024-04-15 16:14:07.241235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-04-15 16:14:07.241565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.285 [2024-04-15 16:14:07.245314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.285 [2024-04-15 16:14:07.245530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-04-15 16:14:07.245849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.545 [2024-04-15 16:14:07.249981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.545 [2024-04-15 16:14:07.250191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.545 [2024-04-15 16:14:07.250337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.545 [2024-04-15 16:14:07.254474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.545 [2024-04-15 16:14:07.254690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.545 [2024-04-15 16:14:07.254834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.545 [2024-04-15 16:14:07.259112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.545 [2024-04-15 16:14:07.259315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.545 [2024-04-15 16:14:07.259494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.545 [2024-04-15 16:14:07.263542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.545 [2024-04-15 16:14:07.263763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.545 [2024-04-15 16:14:07.263905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:21:37.545 [2024-04-15 16:14:07.267816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.545 [2024-04-15 16:14:07.268024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.545 [2024-04-15 16:14:07.268174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.545 [2024-04-15 16:14:07.272287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.545 [2024-04-15 16:14:07.272493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.545 [2024-04-15 16:14:07.272682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.545 [2024-04-15 16:14:07.276293] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.545 [2024-04-15 16:14:07.276628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.545 [2024-04-15 16:14:07.276801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.545 [2024-04-15 16:14:07.280352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.545 [2024-04-15 16:14:07.280717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.545 [2024-04-15 16:14:07.280865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.545 [2024-04-15 16:14:07.284617] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.545 [2024-04-15 16:14:07.284815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.545 [2024-04-15 16:14:07.285042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.545 [2024-04-15 16:14:07.289318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.545 [2024-04-15 16:14:07.289520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.545 [2024-04-15 16:14:07.289779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.545 [2024-04-15 16:14:07.293808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.294012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.294152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.298090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.298297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.298439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.302551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.302766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.302933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.307153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.307352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.307496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.311718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.311954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.312192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.316302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.316514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.316693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.320661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.320848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.321027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.324859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.325056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.325211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.329074] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.329406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.329657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.332905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.333112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.333256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.336863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.337070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.337235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.340901] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.341094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.341229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.344793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.345008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.345157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.348661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.348846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.348979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.352497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.352701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.352871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.356186] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.356393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.356564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.360047] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.360228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.360366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.363696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.363898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.364046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.367641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.367857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.368010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.371631] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.371870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.372011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.375510] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.375726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.375895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.379366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.379552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 
16:14:07.379728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.383123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.383438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.383590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.386983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.387192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.387336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.390778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.390963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.391117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.394550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.394794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.394945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.398463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.546 [2024-04-15 16:14:07.398712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.546 [2024-04-15 16:14:07.398925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.546 [2024-04-15 16:14:07.402309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.402506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.402673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.406220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.406610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:37.547 [2024-04-15 16:14:07.406756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.410292] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.410530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.410773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.414325] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.414537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.414707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.418258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.418470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.418637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.422275] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.422634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.422869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.426318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.426526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.426793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.430314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.430516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.430690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.434307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.434517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.434719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.438426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.438750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.438891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.442401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.442617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.442759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.446447] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.446670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.446820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.450573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.450807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.450979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.454651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.454862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.455028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.458772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.459090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.459246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.462599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.462821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.462964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.466333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.466571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.466826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.470126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.470367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.470635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.473874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.474203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.474391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.477694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.477881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.478074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.481561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.481790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.481936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.485449] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.485674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.485830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.489317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.489617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.489804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.493181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.493365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.493552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.496973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.497237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.497471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.500788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.500975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.501159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.504637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.504813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.547 [2024-04-15 16:14:07.504946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.547 [2024-04-15 16:14:07.508452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.547 [2024-04-15 16:14:07.508654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.548 [2024-04-15 16:14:07.508819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.512427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.512637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.512791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.516234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 
16:14:07.516584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.516808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.520375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.520576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.520736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.524649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.524886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.525033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.529038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.529226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.529361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.533156] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.533342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.533476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.537388] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.537604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.537754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.541546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.541789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.541938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.545934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with 
pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.546149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.546386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.549969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.550295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.550491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.554035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.554373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.554557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.558315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.558507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.558674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.562656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.562875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.563015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.566936] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.567129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.567294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.571106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.571307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.571463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.575384] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.575636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.575784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.579697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.579882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.580056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.583883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.584087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.584271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.588156] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.588338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.588471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.592546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.592770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.592911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.808 [2024-04-15 16:14:07.596656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.808 [2024-04-15 16:14:07.597006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.808 [2024-04-15 16:14:07.597208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.600749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.601117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.601301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.605033] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.605226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.605412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.609539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.609762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.609915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.614091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.614308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.614464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.618434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.618661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.618905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.622908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.623122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.623353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.627218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.627441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.627601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.631643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.631887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.632036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.809 
[2024-04-15 16:14:07.635974] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.636201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.636344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.640141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.640345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.640588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.644448] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.644680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.644852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.648456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.648833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.648975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.652519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.652897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.653042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.656788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.656972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.657106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.661001] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.661185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.661320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.665326] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.665531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.665698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.669738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.669934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.670160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.674094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.674308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.674462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.678631] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.678868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.679073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.683147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.683349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.683504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.687608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.687831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.687990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.692166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.692362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.692515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.696778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.697011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.697209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.701405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.701630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.701798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.705875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.706108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.809 [2024-04-15 16:14:07.706368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.809 [2024-04-15 16:14:07.710169] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.809 [2024-04-15 16:14:07.710574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.710762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.714225] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.714426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.714615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.718378] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.718641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.718853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.722425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.722638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.722911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.726458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.726707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.726917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.730489] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.730737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.730928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.734457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.734711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.734900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.738412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.738769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.738917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.742405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.742633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.742876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.746369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.746641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.746883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.750379] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.750634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.750849] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.754883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.755103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.755256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.759413] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.759640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.759792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.764090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.764365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.764617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.768392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.768782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.769049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.810 [2024-04-15 16:14:07.772790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:37.810 [2024-04-15 16:14:07.773026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.810 [2024-04-15 16:14:07.773230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.071 [2024-04-15 16:14:07.777302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.071 [2024-04-15 16:14:07.777690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.071 [2024-04-15 16:14:07.777942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.071 [2024-04-15 16:14:07.781666] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.071 [2024-04-15 16:14:07.781872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.071 
[2024-04-15 16:14:07.782022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.071 [2024-04-15 16:14:07.785704] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.071 [2024-04-15 16:14:07.785898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.071 [2024-04-15 16:14:07.786156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.071 [2024-04-15 16:14:07.789757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.071 [2024-04-15 16:14:07.790109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.071 [2024-04-15 16:14:07.790318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.071 [2024-04-15 16:14:07.793829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.071 [2024-04-15 16:14:07.794043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.071 [2024-04-15 16:14:07.794266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.071 [2024-04-15 16:14:07.797993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.071 [2024-04-15 16:14:07.798187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.071 [2024-04-15 16:14:07.798338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.801964] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.802199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.802340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.805978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.806327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.806500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.810137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.810345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.810541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.814308] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.814514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.814747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.818402] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.818727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.818940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.822482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.822693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.822878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.826515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.826734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.826960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.830674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.830898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.831123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.834751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.835125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.835293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.839133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.839355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.839591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.843767] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.843967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.844198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.848230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.848426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.848697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.852815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.853001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.853269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.857238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.857441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.857621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.861784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.861995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.862238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.866188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.866379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.866524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.870633] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.870865] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.871020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.875090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.875296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.875452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.879693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.879916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.880059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.884180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.884410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.884604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.888428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.888648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.888788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.893076] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.893537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.893816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.897884] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.898106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.898258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.902819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.903034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.903215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.907334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.907523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.907757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.911827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.912020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.912171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.916333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.072 [2024-04-15 16:14:07.916525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.072 [2024-04-15 16:14:07.916728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.072 [2024-04-15 16:14:07.920898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.921116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.921265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.925436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.925685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.925845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.929857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.930112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.930257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.934333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 
16:14:07.934527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.934689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.938703] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.938902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.939049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.943116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.943301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.943484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.947467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.947703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.947855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.951814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.952022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.952207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.955898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.956277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.956468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.960019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.960387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.960541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.964312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 
00:21:38.073 [2024-04-15 16:14:07.964515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.964704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.968735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.968936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.969141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.973234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.973429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.973590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.977679] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.977890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.978032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.981968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.982179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.982329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.986309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.986521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.986722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.990580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.990803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.990942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.995027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) 
with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.995240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.995394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:07.999517] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:07.999748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:07.999892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:08.004127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:08.004333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:08.004476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:08.008204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:08.008566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:08.008727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:08.012281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:08.012622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:08.012758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:08.016535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:08.016757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:08.016915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:08.020911] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:08.021106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:08.021259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:08.025399] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:08.025622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:08.025792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:08.029882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:08.030088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:08.030252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.073 [2024-04-15 16:14:08.034428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.073 [2024-04-15 16:14:08.034650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.073 [2024-04-15 16:14:08.034810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.038991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.039203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.039362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.043487] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.043693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.043836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.048036] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.048249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.048399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.052450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.053349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.053535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.057746] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.057956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.058118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.062299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.062538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.062722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.066666] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.067076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.067322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.070895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.071245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.071393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.075173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.075356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.075562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.079593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.079793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.079938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.084114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.084306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.084449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
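The repeated "data_crc32_calc_done: *ERROR*: Data digest error" entries in this run come from the test's digest error injection over NVMe/TCP: the receiver recomputes a CRC32C over each data PDU's payload, compares it with the digest carried in the PDU, and every mismatch is surfaced to the host as the TRANSIENT TRANSPORT ERROR (00/22) completion printed next to it. As a rough illustration only (a minimal standalone sketch, not SPDK's tcp.c implementation; the helper names crc32c_sw and pdu_data_digest_ok are made up for this example), such a check can be written as:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78). */
static uint32_t crc32c_sw(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : (crc >> 1);
    }
    return ~crc;
}

/* Hypothetical helper: does the received payload match the digest field? */
static bool pdu_data_digest_ok(const uint8_t *data, size_t len, uint32_t ddgst)
{
    return crc32c_sw(data, len) == ddgst;
}

int main(void)
{
    uint8_t payload[512] = { 0 };      /* stand-in for one data PDU payload */
    uint32_t good_digest = crc32c_sw(payload, sizeof(payload));

    printf("intact payload:    %d\n",
           pdu_data_digest_ok(payload, sizeof(payload), good_digest));
    payload[0] ^= 0xFF;                /* simulate corruption on the wire */
    printf("corrupted payload: %d\n",
           pdu_data_digest_ok(payload, sizeof(payload), good_digest));
    return 0;
}

Built with any C compiler, the sketch prints 1 for the matching digest and 0 for the corrupted one, mirroring the pass/fail decision behind the error lines that continue below.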
00:21:38.334 [2024-04-15 16:14:08.088479] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.088692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.088840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.092817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.092999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.093154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.097328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.097528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.097728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.101838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.102036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.102178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.106259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.106491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.334 [2024-04-15 16:14:08.106664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.334 [2024-04-15 16:14:08.110657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.334 [2024-04-15 16:14:08.110871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.111005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.114941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.115122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.115264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.119228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.119443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.119604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.123311] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.123693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.124036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.127335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.127527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.127701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.131667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.131862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.131998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.136054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.136257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.136399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.140699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.140933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.141083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.145354] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.145609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.145758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.149893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.150135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.150290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.154370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.154620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.154996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.159117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.159334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.159482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.163608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.163816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.163969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.168080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.168278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.168447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.172552] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.172842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.172985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.176730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.176983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.177141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.181236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.181711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.181894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.185775] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.185986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.186128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.190185] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.190380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.190523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.194438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.194648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.194790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.198927] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.199134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.199290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.203295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.203495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.203655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.207775] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.207959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 
16:14:08.208093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.212133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.212335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.212469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.216388] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.216608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.216745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.335 [2024-04-15 16:14:08.220749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.335 [2024-04-15 16:14:08.220973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.335 [2024-04-15 16:14:08.221127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.225098] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.225313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.225464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.229182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.229565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.229841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.233292] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.233690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.233949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.237678] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.237947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.238119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.242007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.242226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.242385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.246235] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.246448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.246599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.250602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.250795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.250935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.255095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.255319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.255459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.259614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.259823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.260007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.264268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.264498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.264750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.268719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.269000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.269222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.273224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.273436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.273603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.277654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.277847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.278030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.282200] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.282415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.282561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.287015] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.287245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.287427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.291786] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.292006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.292148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.336 [2024-04-15 16:14:08.296309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.336 [2024-04-15 16:14:08.296508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.336 [2024-04-15 16:14:08.296667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.300773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.300979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.301127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.305287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.305510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.305718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.309870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.310072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.310231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.314474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.314709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.315017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.319096] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.319313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.319458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.323722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.323930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.324100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.328200] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.328412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.328561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.332897] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.333149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.333295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.337140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.337533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.337902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.342018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.342462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.342640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.346969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.347201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.347347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.352252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.352485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.352649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.357556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.357852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.358005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.362931] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.363164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.363310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.368146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 
16:14:08.368368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.596 [2024-04-15 16:14:08.368512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.596 [2024-04-15 16:14:08.373405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.596 [2024-04-15 16:14:08.373691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.373848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.378657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.378876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.379019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.383827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.384090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.384234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.389056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.389279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.389435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.393987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.394210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.394359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.399222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.399448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.399614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.404285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with 
pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.404514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.404677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.409460] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.409720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.409898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.414724] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.414946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.415087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.420212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.420472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.420625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.425618] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.425857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.426012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.430996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.431230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.431372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.436394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.436658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.436796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.441448] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.441698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.441862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.446743] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.446982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.447128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.451958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.452181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.452321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.457223] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.457463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.457655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.462484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.462727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.462873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.467646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.467895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.468061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.472668] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.472867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.473016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.477397] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.477639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.477776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.482273] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.482501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.482763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.487161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.487392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.487539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.492439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.492683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.492836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.497644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.497868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.498023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.502507] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.502770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.503109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.597 [2024-04-15 16:14:08.507496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.507764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.597 [2024-04-15 16:14:08.507901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.597 
[2024-04-15 16:14:08.512437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.597 [2024-04-15 16:14:08.512676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.598 [2024-04-15 16:14:08.512813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.598 [2024-04-15 16:14:08.517417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.598 [2024-04-15 16:14:08.517684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.598 [2024-04-15 16:14:08.517832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.598 [2024-04-15 16:14:08.522482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.598 [2024-04-15 16:14:08.522783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.598 [2024-04-15 16:14:08.522933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.598 [2024-04-15 16:14:08.527488] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.598 [2024-04-15 16:14:08.527756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.598 [2024-04-15 16:14:08.527891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.598 [2024-04-15 16:14:08.532322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.598 [2024-04-15 16:14:08.532561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.598 [2024-04-15 16:14:08.532730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.598 [2024-04-15 16:14:08.537519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.598 [2024-04-15 16:14:08.537834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.598 [2024-04-15 16:14:08.537984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.598 [2024-04-15 16:14:08.542553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.598 [2024-04-15 16:14:08.542875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.598 [2024-04-15 16:14:08.543188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:21:38.598 [2024-04-15 16:14:08.547615] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.598 [2024-04-15 16:14:08.547853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.598 [2024-04-15 16:14:08.547983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.598 [2024-04-15 16:14:08.552344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.598 [2024-04-15 16:14:08.552579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.598 [2024-04-15 16:14:08.552860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.598 [2024-04-15 16:14:08.557260] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.598 [2024-04-15 16:14:08.557481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.598 [2024-04-15 16:14:08.557716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.857 [2024-04-15 16:14:08.562306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.562561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.562917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.567387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.567605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.567765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.572483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.572751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.572912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.577736] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.577973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.578115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.582770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.583005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.583151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.587819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.588072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.588210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.592343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.592734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.592932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.596673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.597060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.597268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.601594] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.601830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.602016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.606763] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.607008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.607161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.611887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.612089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.612229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.616810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.617035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.617199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.621810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.622066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.622408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.627063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.627365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.627526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.632433] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.632694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.632875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.637629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.637883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.638246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.642701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.642916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.643147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.647767] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.647972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.648119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.652788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.652994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.653125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.657696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.657918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.658070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.662616] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.662853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.662999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.667403] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.667625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.667770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.672281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.672495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.672678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.677106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.677335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.677524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.681975] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.682235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 
16:14:08.682379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.686775] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.687026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.687167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.691554] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.858 [2024-04-15 16:14:08.691812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.858 [2024-04-15 16:14:08.691953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.858 [2024-04-15 16:14:08.696284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.696505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.696668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.700609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.701004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.701382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.705231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.705684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.705935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.709898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.710120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.710264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.714735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.714952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:38.859 [2024-04-15 16:14:08.715100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.719626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.719844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.719991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.724645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.724847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.724995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.729789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.730026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.730166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.735039] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.735261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.735405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.740190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.740432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.740590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.745289] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.745525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.745784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.750365] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.750638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.750933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.755930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.756168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.756385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.761109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.761330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.761525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.766366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.766615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.766947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.771605] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.771825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.772014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.776779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.777016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.777184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.781916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.782158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.782312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.786892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.787117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.787273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.791862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.792097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.792249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.797210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.797468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.797637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.802693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.802930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.803084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.807834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.808037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.808207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.812815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.813061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.813243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.859 [2024-04-15 16:14:08.818195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:38.859 [2024-04-15 16:14:08.818455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.859 [2024-04-15 16:14:08.818638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.119 [2024-04-15 16:14:08.823720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:39.119 [2024-04-15 16:14:08.823967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.119 [2024-04-15 16:14:08.824139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.119 [2024-04-15 16:14:08.829091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:39.119 [2024-04-15 16:14:08.829348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.119 [2024-04-15 16:14:08.829660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.119 [2024-04-15 16:14:08.834173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c67f40) with pdu=0x2000190fef90 00:21:39.119 [2024-04-15 16:14:08.834396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.119 [2024-04-15 16:14:08.834669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.119 00:21:39.119 Latency(us) 00:21:39.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.119 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:39.119 nvme0n1 : 2.00 6913.12 864.14 0.00 0.00 2310.14 1388.74 6054.28 00:21:39.119 =================================================================================================================== 00:21:39.119 Total : 6913.12 864.14 0.00 0.00 2310.14 1388.74 6054.28 00:21:39.119 0 00:21:39.119 16:14:08 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:39.119 16:14:08 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:39.119 16:14:08 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:39.119 | .driver_specific 00:21:39.119 | .nvme_error 00:21:39.119 | .status_code 00:21:39.119 | .command_transient_transport_error' 00:21:39.119 16:14:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:39.377 16:14:09 -- host/digest.sh@71 -- # (( 446 > 0 )) 00:21:39.377 16:14:09 -- host/digest.sh@73 -- # killprocess 91689 00:21:39.377 16:14:09 -- common/autotest_common.sh@936 -- # '[' -z 91689 ']' 00:21:39.377 16:14:09 -- common/autotest_common.sh@940 -- # kill -0 91689 00:21:39.377 16:14:09 -- common/autotest_common.sh@941 -- # uname 00:21:39.377 16:14:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:39.377 16:14:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91689 00:21:39.377 killing process with pid 91689 00:21:39.377 Received shutdown signal, test time was about 2.000000 seconds 00:21:39.377 00:21:39.377 Latency(us) 00:21:39.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.377 =================================================================================================================== 00:21:39.377 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.377 16:14:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:39.377 16:14:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:39.377 16:14:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 
91689' 00:21:39.377 16:14:09 -- common/autotest_common.sh@955 -- # kill 91689 00:21:39.377 16:14:09 -- common/autotest_common.sh@960 -- # wait 91689 00:21:39.635 16:14:09 -- host/digest.sh@116 -- # killprocess 91497 00:21:39.635 16:14:09 -- common/autotest_common.sh@936 -- # '[' -z 91497 ']' 00:21:39.635 16:14:09 -- common/autotest_common.sh@940 -- # kill -0 91497 00:21:39.635 16:14:09 -- common/autotest_common.sh@941 -- # uname 00:21:39.635 16:14:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:39.635 16:14:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91497 00:21:39.635 16:14:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:39.635 16:14:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:39.635 16:14:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91497' 00:21:39.635 killing process with pid 91497 00:21:39.635 16:14:09 -- common/autotest_common.sh@955 -- # kill 91497 00:21:39.635 16:14:09 -- common/autotest_common.sh@960 -- # wait 91497 00:21:39.635 00:21:39.635 real 0m17.126s 00:21:39.635 user 0m32.049s 00:21:39.635 sys 0m5.223s 00:21:39.635 16:14:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:39.635 16:14:09 -- common/autotest_common.sh@10 -- # set +x 00:21:39.635 ************************************ 00:21:39.635 END TEST nvmf_digest_error 00:21:39.635 ************************************ 00:21:39.894 16:14:09 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:39.894 16:14:09 -- host/digest.sh@150 -- # nvmftestfini 00:21:39.894 16:14:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:39.894 16:14:09 -- nvmf/common.sh@117 -- # sync 00:21:39.894 16:14:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:39.894 16:14:09 -- nvmf/common.sh@120 -- # set +e 00:21:39.894 16:14:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:39.894 16:14:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:39.894 rmmod nvme_tcp 00:21:39.894 rmmod nvme_fabrics 00:21:39.894 16:14:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:39.894 16:14:09 -- nvmf/common.sh@124 -- # set -e 00:21:39.894 16:14:09 -- nvmf/common.sh@125 -- # return 0 00:21:39.894 16:14:09 -- nvmf/common.sh@478 -- # '[' -n 91497 ']' 00:21:39.894 16:14:09 -- nvmf/common.sh@479 -- # killprocess 91497 00:21:39.894 16:14:09 -- common/autotest_common.sh@936 -- # '[' -z 91497 ']' 00:21:39.894 16:14:09 -- common/autotest_common.sh@940 -- # kill -0 91497 00:21:39.894 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (91497) - No such process 00:21:39.894 16:14:09 -- common/autotest_common.sh@963 -- # echo 'Process with pid 91497 is not found' 00:21:39.894 Process with pid 91497 is not found 00:21:39.894 16:14:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:39.894 16:14:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:39.894 16:14:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:39.894 16:14:09 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:39.894 16:14:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:39.894 16:14:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.894 16:14:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.894 16:14:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.894 16:14:09 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:39.894 00:21:39.894 real 0m36.095s 00:21:39.894 user 1m5.899s 00:21:39.894 sys 0m11.415s 
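For readers skimming the nvmf_digest_error run that just finished: the pass/fail decision is the "(( 446 > 0 ))" assertion in the trace, where 446 is the number of COMMAND TRANSIENT TRANSPORT ERROR completions accumulated while data digests were being corrupted. A condensed, illustrative sketch of that check follows; the rpc.py path, the /var/tmp/bperf.sock socket, the bdev name and the jq field are taken from the trace, while the errcount variable and the dotted jq form are only for readability and are not part of the captured log.
  # hedged sketch, not part of the captured trace
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))   # this particular run counted 446 transient transport errors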
00:21:39.894 ************************************ 00:21:39.894 END TEST nvmf_digest 00:21:39.894 ************************************ 00:21:39.894 16:14:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:39.894 16:14:09 -- common/autotest_common.sh@10 -- # set +x 00:21:39.894 16:14:09 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:21:39.894 16:14:09 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:21:39.894 16:14:09 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:39.894 16:14:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:39.894 16:14:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:39.894 16:14:09 -- common/autotest_common.sh@10 -- # set +x 00:21:40.153 ************************************ 00:21:40.153 START TEST nvmf_multipath 00:21:40.153 ************************************ 00:21:40.153 16:14:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:40.153 * Looking for test storage... 00:21:40.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:40.153 16:14:10 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:40.153 16:14:10 -- nvmf/common.sh@7 -- # uname -s 00:21:40.153 16:14:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.153 16:14:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.153 16:14:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.153 16:14:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.153 16:14:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.153 16:14:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.153 16:14:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.153 16:14:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.153 16:14:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.153 16:14:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.153 16:14:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:21:40.153 16:14:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:21:40.153 16:14:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.153 16:14:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.153 16:14:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:40.153 16:14:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.153 16:14:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:40.153 16:14:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.153 16:14:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.153 16:14:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.153 16:14:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.153 16:14:10 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.153 16:14:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.153 16:14:10 -- paths/export.sh@5 -- # export PATH 00:21:40.154 16:14:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.154 16:14:10 -- nvmf/common.sh@47 -- # : 0 00:21:40.154 16:14:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:40.154 16:14:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:40.154 16:14:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.154 16:14:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.154 16:14:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.154 16:14:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:40.154 16:14:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:40.154 16:14:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:40.154 16:14:10 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:40.154 16:14:10 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:40.154 16:14:10 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:40.154 16:14:10 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:40.154 16:14:10 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:40.154 16:14:10 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:40.154 16:14:10 -- host/multipath.sh@30 -- # nvmftestinit 00:21:40.154 16:14:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:40.154 16:14:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.154 16:14:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:40.154 16:14:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:40.154 16:14:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:40.154 16:14:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.154 16:14:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.154 16:14:10 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.154 16:14:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:40.154 16:14:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:40.154 16:14:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:40.154 16:14:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:40.154 16:14:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:40.154 16:14:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:40.154 16:14:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.154 16:14:10 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.154 16:14:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:40.154 16:14:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:40.154 16:14:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:40.154 16:14:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:40.154 16:14:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:40.154 16:14:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.154 16:14:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:40.154 16:14:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:40.154 16:14:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:40.154 16:14:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:40.154 16:14:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:40.154 16:14:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:40.412 Cannot find device "nvmf_tgt_br" 00:21:40.412 16:14:10 -- nvmf/common.sh@155 -- # true 00:21:40.412 16:14:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:40.412 Cannot find device "nvmf_tgt_br2" 00:21:40.412 16:14:10 -- nvmf/common.sh@156 -- # true 00:21:40.412 16:14:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:40.412 16:14:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:40.412 Cannot find device "nvmf_tgt_br" 00:21:40.412 16:14:10 -- nvmf/common.sh@158 -- # true 00:21:40.412 16:14:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:40.412 Cannot find device "nvmf_tgt_br2" 00:21:40.412 16:14:10 -- nvmf/common.sh@159 -- # true 00:21:40.412 16:14:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:40.412 16:14:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:40.412 16:14:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:40.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:40.412 16:14:10 -- nvmf/common.sh@162 -- # true 00:21:40.412 16:14:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:40.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:40.412 16:14:10 -- nvmf/common.sh@163 -- # true 00:21:40.412 16:14:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:40.412 16:14:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:40.412 16:14:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:40.413 16:14:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:40.413 16:14:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:40.413 16:14:10 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:40.413 16:14:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:40.413 16:14:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:40.413 16:14:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:40.413 16:14:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:40.413 16:14:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:40.413 16:14:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:40.413 16:14:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:40.671 16:14:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:40.671 16:14:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:40.671 16:14:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:40.671 16:14:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:40.671 16:14:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:40.671 16:14:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:40.671 16:14:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:40.671 16:14:10 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:40.671 16:14:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:40.671 16:14:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:40.671 16:14:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:40.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:21:40.671 00:21:40.671 --- 10.0.0.2 ping statistics --- 00:21:40.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.671 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:40.671 16:14:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:40.671 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:40.671 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:21:40.671 00:21:40.671 --- 10.0.0.3 ping statistics --- 00:21:40.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.671 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:40.671 16:14:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:40.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:21:40.671 00:21:40.671 --- 10.0.0.1 ping statistics --- 00:21:40.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.671 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:21:40.671 16:14:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.671 16:14:10 -- nvmf/common.sh@422 -- # return 0 00:21:40.671 16:14:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:40.671 16:14:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.671 16:14:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:40.671 16:14:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:40.671 16:14:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.671 16:14:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:40.671 16:14:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:40.671 16:14:10 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:40.671 16:14:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:40.671 16:14:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:40.671 16:14:10 -- common/autotest_common.sh@10 -- # set +x 00:21:40.671 16:14:10 -- nvmf/common.sh@470 -- # nvmfpid=91956 00:21:40.671 16:14:10 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:40.671 16:14:10 -- nvmf/common.sh@471 -- # waitforlisten 91956 00:21:40.671 16:14:10 -- common/autotest_common.sh@817 -- # '[' -z 91956 ']' 00:21:40.671 16:14:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.671 16:14:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:40.671 16:14:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.671 16:14:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:40.671 16:14:10 -- common/autotest_common.sh@10 -- # set +x 00:21:40.671 [2024-04-15 16:14:10.571687] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:21:40.671 [2024-04-15 16:14:10.571973] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.930 [2024-04-15 16:14:10.712710] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:40.930 [2024-04-15 16:14:10.779048] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.930 [2024-04-15 16:14:10.779337] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.930 [2024-04-15 16:14:10.779513] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.930 [2024-04-15 16:14:10.779608] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.930 [2024-04-15 16:14:10.779716] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:40.930 [2024-04-15 16:14:10.779883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.930 [2024-04-15 16:14:10.779890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.863 16:14:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:41.863 16:14:11 -- common/autotest_common.sh@850 -- # return 0 00:21:41.863 16:14:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:41.863 16:14:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:41.863 16:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.863 16:14:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.863 16:14:11 -- host/multipath.sh@33 -- # nvmfapp_pid=91956 00:21:41.863 16:14:11 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:41.863 [2024-04-15 16:14:11.722048] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.863 16:14:11 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:42.121 Malloc0 00:21:42.121 16:14:11 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:42.378 16:14:12 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:42.650 16:14:12 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.650 [2024-04-15 16:14:12.575280] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.650 16:14:12 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:42.908 [2024-04-15 16:14:12.839458] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:42.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:42.908 16:14:12 -- host/multipath.sh@44 -- # bdevperf_pid=92006 00:21:42.908 16:14:12 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:42.908 16:14:12 -- host/multipath.sh@47 -- # waitforlisten 92006 /var/tmp/bdevperf.sock 00:21:42.908 16:14:12 -- common/autotest_common.sh@817 -- # '[' -z 92006 ']' 00:21:42.908 16:14:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.908 16:14:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:42.908 16:14:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
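Before bdevperf attaches, the multipath test has just finished building the target side with the RPCs traced above. A condensed recap, illustrative only: the commands and arguments are copied from the trace, rpc.py is assumed to reach the nvmf_tgt's default /var/tmp/spdk.sock, and the $rpc shorthand is added here purely for readability.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, same options as in the trace
  $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # subsystem with ANA reporting enabled
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # two listeners whose ANA states
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # the test flips in the runs below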
00:21:42.909 16:14:12 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:42.909 16:14:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:42.909 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:44.282 16:14:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:44.282 16:14:13 -- common/autotest_common.sh@850 -- # return 0 00:21:44.282 16:14:13 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:44.282 16:14:14 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:44.541 Nvme0n1 00:21:44.541 16:14:14 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:45.108 Nvme0n1 00:21:45.108 16:14:14 -- host/multipath.sh@78 -- # sleep 1 00:21:45.108 16:14:14 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:46.041 16:14:15 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:46.041 16:14:15 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:46.300 16:14:16 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:46.558 16:14:16 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:46.558 16:14:16 -- host/multipath.sh@65 -- # dtrace_pid=92057 00:21:46.558 16:14:16 -- host/multipath.sh@66 -- # sleep 6 00:21:46.558 16:14:16 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 91956 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:53.171 16:14:22 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:53.171 16:14:22 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:53.171 16:14:22 -- host/multipath.sh@67 -- # active_port=4421 00:21:53.171 16:14:22 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:53.171 Attaching 4 probes... 
00:21:53.171 @path[10.0.0.2, 4421]: 18872 00:21:53.171 @path[10.0.0.2, 4421]: 17158 00:21:53.171 @path[10.0.0.2, 4421]: 18951 00:21:53.171 @path[10.0.0.2, 4421]: 19424 00:21:53.171 @path[10.0.0.2, 4421]: 19189 00:21:53.171 16:14:22 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:53.171 16:14:22 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:53.171 16:14:22 -- host/multipath.sh@69 -- # sed -n 1p 00:21:53.171 16:14:22 -- host/multipath.sh@69 -- # port=4421 00:21:53.171 16:14:22 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:53.171 16:14:22 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:53.171 16:14:22 -- host/multipath.sh@72 -- # kill 92057 00:21:53.171 16:14:22 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:53.171 16:14:22 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:53.171 16:14:22 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:53.171 16:14:23 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:53.428 16:14:23 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:53.428 16:14:23 -- host/multipath.sh@65 -- # dtrace_pid=92175 00:21:53.428 16:14:23 -- host/multipath.sh@66 -- # sleep 6 00:21:53.428 16:14:23 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 91956 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:59.995 16:14:29 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:59.995 16:14:29 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:59.995 16:14:29 -- host/multipath.sh@67 -- # active_port=4420 00:21:59.995 16:14:29 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:59.995 Attaching 4 probes... 
00:21:59.995 @path[10.0.0.2, 4420]: 19469 00:21:59.995 @path[10.0.0.2, 4420]: 19500 00:21:59.995 @path[10.0.0.2, 4420]: 19433 00:21:59.995 @path[10.0.0.2, 4420]: 19763 00:21:59.995 @path[10.0.0.2, 4420]: 16523 00:21:59.995 16:14:29 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:59.995 16:14:29 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:59.995 16:14:29 -- host/multipath.sh@69 -- # sed -n 1p 00:21:59.995 16:14:29 -- host/multipath.sh@69 -- # port=4420 00:21:59.995 16:14:29 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:59.995 16:14:29 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:59.995 16:14:29 -- host/multipath.sh@72 -- # kill 92175 00:21:59.995 16:14:29 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:59.995 16:14:29 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:59.995 16:14:29 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:59.995 16:14:29 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:00.253 16:14:30 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:00.253 16:14:30 -- host/multipath.sh@65 -- # dtrace_pid=92282 00:22:00.253 16:14:30 -- host/multipath.sh@66 -- # sleep 6 00:22:00.253 16:14:30 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 91956 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:06.815 16:14:36 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:06.815 16:14:36 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:06.815 16:14:36 -- host/multipath.sh@67 -- # active_port=4421 00:22:06.815 16:14:36 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:06.815 Attaching 4 probes... 
00:22:06.815 @path[10.0.0.2, 4421]: 15180 00:22:06.815 @path[10.0.0.2, 4421]: 18946 00:22:06.815 @path[10.0.0.2, 4421]: 19276 00:22:06.815 @path[10.0.0.2, 4421]: 18426 00:22:06.815 @path[10.0.0.2, 4421]: 18416 00:22:06.815 16:14:36 -- host/multipath.sh@69 -- # sed -n 1p 00:22:06.815 16:14:36 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:06.815 16:14:36 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:06.815 16:14:36 -- host/multipath.sh@69 -- # port=4421 00:22:06.815 16:14:36 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:06.815 16:14:36 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:06.815 16:14:36 -- host/multipath.sh@72 -- # kill 92282 00:22:06.815 16:14:36 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:06.815 16:14:36 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:06.815 16:14:36 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:06.815 16:14:36 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:07.072 16:14:36 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:07.072 16:14:36 -- host/multipath.sh@65 -- # dtrace_pid=92400 00:22:07.072 16:14:36 -- host/multipath.sh@66 -- # sleep 6 00:22:07.072 16:14:36 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 91956 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:13.631 16:14:42 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:13.631 16:14:42 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:13.631 16:14:43 -- host/multipath.sh@67 -- # active_port= 00:22:13.631 16:14:43 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:13.631 Attaching 4 probes... 
00:22:13.631 00:22:13.631 00:22:13.631 00:22:13.631 00:22:13.631 00:22:13.631 16:14:43 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:13.631 16:14:43 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:13.631 16:14:43 -- host/multipath.sh@69 -- # sed -n 1p 00:22:13.631 16:14:43 -- host/multipath.sh@69 -- # port= 00:22:13.631 16:14:43 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:13.631 16:14:43 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:13.631 16:14:43 -- host/multipath.sh@72 -- # kill 92400 00:22:13.631 16:14:43 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:13.631 16:14:43 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:13.631 16:14:43 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:13.631 16:14:43 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:14.196 16:14:43 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:14.196 16:14:43 -- host/multipath.sh@65 -- # dtrace_pid=92518 00:22:14.196 16:14:43 -- host/multipath.sh@66 -- # sleep 6 00:22:14.196 16:14:43 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 91956 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:20.771 16:14:49 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:20.771 16:14:49 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:20.771 16:14:50 -- host/multipath.sh@67 -- # active_port=4421 00:22:20.771 16:14:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:20.771 Attaching 4 probes... 
00:22:20.771 @path[10.0.0.2, 4421]: 17779 00:22:20.771 @path[10.0.0.2, 4421]: 17976 00:22:20.771 @path[10.0.0.2, 4421]: 17936 00:22:20.771 @path[10.0.0.2, 4421]: 17704 00:22:20.771 @path[10.0.0.2, 4421]: 17740 00:22:20.771 16:14:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:20.771 16:14:50 -- host/multipath.sh@69 -- # sed -n 1p 00:22:20.771 16:14:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:20.771 16:14:50 -- host/multipath.sh@69 -- # port=4421 00:22:20.771 16:14:50 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:20.771 16:14:50 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:20.771 16:14:50 -- host/multipath.sh@72 -- # kill 92518 00:22:20.771 16:14:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:20.771 16:14:50 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:20.771 [2024-04-15 16:14:50.425672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.425982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.426108] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.426213] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.426270] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.426363] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.426462] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.426553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.426673] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.426730] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.426822] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.427033] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.427086] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.427236] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.427289] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.427343] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.428992] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 [2024-04-15 16:14:50.429130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b0d20 is same with the state(5) to be set 00:22:20.771 16:14:50 -- host/multipath.sh@101 -- # sleep 1 00:22:21.716 16:14:51 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:21.716 16:14:51 -- host/multipath.sh@65 -- # dtrace_pid=92636 00:22:21.716 16:14:51 -- host/multipath.sh@66 -- # sleep 6 00:22:21.716 16:14:51 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 91956 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:28.293 16:14:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:28.293 16:14:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:28.293 16:14:57 -- host/multipath.sh@67 -- # active_port=4420 00:22:28.293 16:14:57 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:28.293 Attaching 4 probes... 00:22:28.293 @path[10.0.0.2, 4420]: 17887 00:22:28.293 @path[10.0.0.2, 4420]: 18385 00:22:28.293 @path[10.0.0.2, 4420]: 18561 00:22:28.293 @path[10.0.0.2, 4420]: 18297 00:22:28.293 @path[10.0.0.2, 4420]: 18355 00:22:28.293 16:14:57 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:28.293 16:14:57 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:28.293 16:14:57 -- host/multipath.sh@69 -- # sed -n 1p 00:22:28.293 16:14:57 -- host/multipath.sh@69 -- # port=4420 00:22:28.293 16:14:57 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:28.293 16:14:57 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:28.293 16:14:57 -- host/multipath.sh@72 -- # kill 92636 00:22:28.293 16:14:57 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:28.293 16:14:57 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:28.293 [2024-04-15 16:14:58.067871] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:28.293 16:14:58 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:28.552 16:14:58 -- host/multipath.sh@111 -- # sleep 6 00:22:35.109 16:15:04 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:35.109 16:15:04 -- host/multipath.sh@65 -- # dtrace_pid=92816 00:22:35.109 16:15:04 -- host/multipath.sh@66 -- # sleep 6 00:22:35.109 16:15:04 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 91956 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:41.684 16:15:10 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:41.684 16:15:10 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:41.684 16:15:10 -- host/multipath.sh@67 -- # active_port=4421 00:22:41.684 16:15:10 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:41.684 Attaching 4 probes... 
00:22:41.684 @path[10.0.0.2, 4421]: 17999 00:22:41.684 @path[10.0.0.2, 4421]: 18432 00:22:41.684 @path[10.0.0.2, 4421]: 18240 00:22:41.684 @path[10.0.0.2, 4421]: 18131 00:22:41.684 @path[10.0.0.2, 4421]: 18125 00:22:41.684 16:15:10 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:41.684 16:15:10 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:41.684 16:15:10 -- host/multipath.sh@69 -- # sed -n 1p 00:22:41.684 16:15:10 -- host/multipath.sh@69 -- # port=4421 00:22:41.684 16:15:10 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:41.684 16:15:10 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:41.684 16:15:10 -- host/multipath.sh@72 -- # kill 92816 00:22:41.684 16:15:10 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:41.684 16:15:10 -- host/multipath.sh@114 -- # killprocess 92006 00:22:41.684 16:15:10 -- common/autotest_common.sh@936 -- # '[' -z 92006 ']' 00:22:41.684 16:15:10 -- common/autotest_common.sh@940 -- # kill -0 92006 00:22:41.684 16:15:10 -- common/autotest_common.sh@941 -- # uname 00:22:41.684 16:15:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:41.684 16:15:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92006 00:22:41.684 killing process with pid 92006 00:22:41.684 16:15:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:41.684 16:15:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:41.684 16:15:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92006' 00:22:41.684 16:15:10 -- common/autotest_common.sh@955 -- # kill 92006 00:22:41.684 16:15:10 -- common/autotest_common.sh@960 -- # wait 92006 00:22:41.684 Connection closed with partial response: 00:22:41.684 00:22:41.684 00:22:41.684 16:15:10 -- host/multipath.sh@116 -- # wait 92006 00:22:41.684 16:15:10 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:41.684 [2024-04-15 16:14:12.909316] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:22:41.684 [2024-04-15 16:14:12.909445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92006 ] 00:22:41.684 [2024-04-15 16:14:13.059586] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.684 [2024-04-15 16:14:13.115561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.684 Running I/O for 90 seconds... 
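The ANA check cycle that repeats throughout the trace above (set the two listeners' ANA states over RPC, attach the nvmf_path.bt probes, then verify that I/O lands on the expected port) can be summarized as a small shell sketch. This is a re-sketch assembled only from the commands visible in the log; the helper names mirror set_ANA_state/confirm_io_on_port seen in the xtrace, but the $rootdir and $tgt_pid variables, the backgrounding of bpftrace.sh and the trace.txt redirection are assumptions of this sketch, not the verbatim multipath.sh source.

# Minimal sketch of the ANA-state verification cycle traced above.
# Assumptions: $rootdir is the SPDK repo checkout, $tgt_pid is the PID passed to
# bpftrace.sh in the log (91956), NQN/address/ports taken from the log.
rootdir=/home/vagrant/spdk_repo/spdk
tgt_pid=91956
nqn=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {     # e.g. set_ANA_state non_optimized optimized
    "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

confirm_io_on_port() { # e.g. confirm_io_on_port optimized 4421
    local state=$1 expected_port=$2 dtrace_pid active_port port ok=0
    # Count I/O per path for ~6 s with the nvmf_path.bt bpftrace script
    # (capturing its output in trace.txt is an assumption of this sketch).
    "$rootdir/scripts/bpftrace.sh" "$tgt_pid" "$rootdir/scripts/bpf/nvmf_path.bt" > trace.txt &
    dtrace_pid=$!
    sleep 6
    # Which listener does the target currently report in the expected ANA state?
    active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners "$nqn" \
        | jq -r ".[] | select(.ana_states[0].ana_state==\"$state\") | .address.trsvcid")
    # Which port actually received I/O? Parse the first "@path[10.0.0.2, <port>]: <count>" line.
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
    [[ $port == "$expected_port" && $port == "$active_port" ]] || ok=1
    kill "$dtrace_pid"
    rm -f trace.txt
    return "$ok"
}

Each block of the trace above corresponds to one set_ANA_state call followed by one confirm_io_on_port call; the empty @path block earlier in the log is the inaccessible/inaccessible case, where no port is expected to carry I/O and both the parsed port and the jq result are empty strings.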
00:22:41.684 [2024-04-15 16:14:23.324661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.684 [2024-04-15 16:14:23.324728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:41.684 [2024-04-15 16:14:23.324774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.684 [2024-04-15 16:14:23.324790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.324811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.324826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.324846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.324860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.324881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.324895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.324915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.324929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.324950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.324964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.324984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.324998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.325036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.325070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.325123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.325159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.325193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.325228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.325263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.325297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.685 [2024-04-15 16:14:23.325333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.685 [2024-04-15 16:14:23.325368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.685 [2024-04-15 16:14:23.325402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.685 [2024-04-15 16:14:23.325436] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.685 [2024-04-15 16:14:23.325470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.685 [2024-04-15 16:14:23.325504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.685 [2024-04-15 16:14:23.325538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.325565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.685 [2024-04-15 16:14:23.325621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114136 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:71 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.685 [2024-04-15 16:14:23.326610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:41.685 [2024-04-15 16:14:23.326632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.326647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.326680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.326694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.326715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.326729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.326749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.326764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.326784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.326798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.326819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.326833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.326853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.326867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.326888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.326907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.326928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.326942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 
16:14:23.326962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.326976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.326997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.327011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.327046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.327081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.327115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.327149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.327184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.327220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.327255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.327290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.327329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.327364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.327398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.327432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.327484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.686 [2024-04-15 16:14:23.327534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.686 [2024-04-15 16:14:23.327572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.686 [2024-04-15 16:14:23.327616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.686 [2024-04-15 16:14:23.327653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.686 [2024-04-15 16:14:23.327690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.686 [2024-04-15 16:14:23.327727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.686 [2024-04-15 16:14:23.327765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.686 [2024-04-15 16:14:23.327810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.686 [2024-04-15 16:14:23.327847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.686 [2024-04-15 16:14:23.327884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.686 [2024-04-15 16:14:23.327921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.686 [2024-04-15 16:14:23.327958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.327979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.686 [2024-04-15 16:14:23.327995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.328016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.686 [2024-04-15 16:14:23.328031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:41.686 [2024-04-15 16:14:23.328053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:41.686 [2024-04-15 16:14:23.328068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.328105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.328144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.328181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.328218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.328254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.328307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.328341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:56 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 
16:14:23.328799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:114456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.328970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.687 [2024-04-15 16:14:23.328985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.329006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.329020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.329040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.329054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.329074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.329089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.329109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.329124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.329144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.329164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.329184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.329198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.329219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.329233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.329253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.329268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.329289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.329306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.329326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.329340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.329360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.329375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.329396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.329410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.329430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.687 [2024-04-15 16:14:23.329444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:41.687 [2024-04-15 16:14:23.329464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:23.329478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:23.329499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:23.329513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:23.329533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:23.329547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:23.329567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:23.329603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:23.329624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:23.329655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:23.329677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:23.329692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:23.329714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:23.329729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:23.329750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:23.329765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:23.329787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:23.329802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:23.329824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:23.329839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:23.329861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:41.688 [2024-04-15 16:14:23.329876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:29.853242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:29.853328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:29.853366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:29.853403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:29.853458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:29.853496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:29.853533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.688 [2024-04-15 16:14:29.853571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.853630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 
nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.853668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.853705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.853743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.853781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.853818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.853856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.853892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.853930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.853962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.853978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.854000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.854015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.854038] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.854054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.854076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.854091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.854114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.854129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.854152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.854167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.854190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.854205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.854228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.854243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.854265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.854281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.854303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.854320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.854343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.688 [2024-04-15 16:14:29.854358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:41.688 [2024-04-15 16:14:29.854380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.689 [2024-04-15 16:14:29.854396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c 
p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.854428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.689 [2024-04-15 16:14:29.854444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.854467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.689 [2024-04-15 16:14:29.854483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.854505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.689 [2024-04-15 16:14:29.854521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.854547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.854563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.854596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.854625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.854648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.854663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.854686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.854702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.854724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.854739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.854761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.854777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.854800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.854815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.854848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.854863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.854885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.854900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.854921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.854941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.854963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.854978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.855016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.855052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.855089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.855126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.855162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.689 [2024-04-15 16:14:29.855199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.689 [2024-04-15 16:14:29.855236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.689 [2024-04-15 16:14:29.855273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.689 [2024-04-15 16:14:29.855309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.689 [2024-04-15 16:14:29.855346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.689 [2024-04-15 16:14:29.855384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.689 [2024-04-15 16:14:29.855423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.689 [2024-04-15 16:14:29.855459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.855496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.855533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:41.689 [2024-04-15 16:14:29.855570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:41.689 [2024-04-15 16:14:29.855599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.689 [2024-04-15 16:14:29.855615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.855636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.855651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.855672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.855687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.855708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.855724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.855746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.855760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.855799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.855816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.855838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.855853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.855880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.855895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.855917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.855932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.855953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 
lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.855968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.855990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.690 [2024-04-15 16:14:29.856114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.690 [2024-04-15 16:14:29.856151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.690 [2024-04-15 16:14:29.856188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.690 [2024-04-15 16:14:29.856224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.690 [2024-04-15 16:14:29.856261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.690 [2024-04-15 16:14:29.856297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.690 [2024-04-15 16:14:29.856339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.690 [2024-04-15 16:14:29.856376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:22:41.690 [2024-04-15 16:14:29.856703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.856967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.690 [2024-04-15 16:14:29.856983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.857004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.690 [2024-04-15 16:14:29.857020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.857041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.690 [2024-04-15 16:14:29.857056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.857078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.690 [2024-04-15 16:14:29.857093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:41.690 [2024-04-15 16:14:29.857114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.857713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.857728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.858497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.858523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.858556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.858572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.858613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:41.691 [2024-04-15 16:14:29.858630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.858660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.858676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.858705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:29.858721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.858750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:29.858766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.858795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:29.858811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.858843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:29.858858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.858887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:29.858903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.858933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:29.858948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.858977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:29.858994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.859026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:29.859042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:29.859093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 
nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:29.859110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:36.907570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:36.907669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:36.907736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:36.907759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:36.907787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:36.907807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:36.907834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:36.907853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:36.907879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:36.907898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:36.907924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:36.907944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:36.907970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:36.907990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:36.908016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.691 [2024-04-15 16:14:36.908035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:36.908061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:36.908082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:36.908110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.691 [2024-04-15 16:14:36.908130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:41.691 [2024-04-15 16:14:36.908156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:22:41.692 [2024-04-15 16:14:36.908634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.908962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.908986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.909004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.909047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.909098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.909154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.909199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.692 [2024-04-15 16:14:36.909951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.909983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:41.692 [2024-04-15 16:14:36.910002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:41.692 [2024-04-15 16:14:36.910027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.692 [2024-04-15 16:14:36.910046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.693 [2024-04-15 16:14:36.910094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.693 [2024-04-15 16:14:36.910139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.693 [2024-04-15 16:14:36.910177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.693 [2024-04-15 16:14:36.910214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.693 [2024-04-15 16:14:36.910252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.693 [2024-04-15 16:14:36.910289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 
nsid:1 lba:10616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.910929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.910951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.693 [2024-04-15 16:14:36.910973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.911007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.693 [2024-04-15 16:14:36.911024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.911046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.693 [2024-04-15 16:14:36.911061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.911086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.693 [2024-04-15 16:14:36.911112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.911137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.693 [2024-04-15 16:14:36.911153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.911176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.693 [2024-04-15 16:14:36.911191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:22:41.693 [2024-04-15 16:14:36.911213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.693 [2024-04-15 16:14:36.911229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.911252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.693 [2024-04-15 16:14:36.911267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.911293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.911309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.911331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.911346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.911368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.693 [2024-04-15 16:14:36.911384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:41.693 [2024-04-15 16:14:36.911405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.911420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.911457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.911501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.911538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.911575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.911623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.911661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.911698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.911735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.911772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.911810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.911848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.911885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.911922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.911965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.911988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:41.694 [2024-04-15 16:14:36.912352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.912634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.912656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.913517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.913552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.913617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.913638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.913670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:10384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.913689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.913721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.913739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.913771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.694 [2024-04-15 16:14:36.913789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.913821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.913839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:41.694 [2024-04-15 16:14:36.913870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.694 [2024-04-15 16:14:36.913901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:36.913934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.695 [2024-04-15 16:14:36.913952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:36.913983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.695 [2024-04-15 16:14:36.914002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:36.914034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.695 [2024-04-15 16:14:36.914052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:36.914086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.695 [2024-04-15 16:14:36.914108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:36.914140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.695 [2024-04-15 16:14:36.914155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:36.914199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.695 [2024-04-15 16:14:36.914216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.427240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.695 [2024-04-15 16:14:50.427293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.427312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.695 [2024-04-15 16:14:50.427328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.427344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.695 [2024-04-15 16:14:50.427360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.427375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.695 [2024-04-15 16:14:50.427390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.427405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20322e0 is same with the state(5) to be set 00:22:41.695 [2024-04-15 16:14:50.429270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.429977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.429992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.430009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.430024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.430052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.430067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.430084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.695 [2024-04-15 16:14:50.430099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.430116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.695 [2024-04-15 16:14:50.430131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 
[2024-04-15 16:14:50.430148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.695 [2024-04-15 16:14:50.430163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.430179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.695 [2024-04-15 16:14:50.430200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.430216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.695 [2024-04-15 16:14:50.430231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.430248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.695 [2024-04-15 16:14:50.430262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.695 [2024-04-15 16:14:50.430279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.696 [2024-04-15 16:14:50.430388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.696 [2024-04-15 16:14:50.430419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.696 [2024-04-15 16:14:50.430451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.696 [2024-04-15 16:14:50.430481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.696 [2024-04-15 16:14:50.430513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.696 [2024-04-15 16:14:50.430544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.696 [2024-04-15 16:14:50.430575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.696 [2024-04-15 16:14:50.430620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.430977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.430992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.431030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.431062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.431093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82776 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.431124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.431155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.431185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.431216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.431247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.431278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.431309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.431339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.696 [2024-04-15 16:14:50.431370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.696 [2024-04-15 16:14:50.431406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.696 [2024-04-15 
16:14:50.431437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.696 [2024-04-15 16:14:50.431469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.696 [2024-04-15 16:14:50.431500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.696 [2024-04-15 16:14:50.431531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.696 [2024-04-15 16:14:50.431547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.431562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.431586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.431601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.431619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.431655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.431672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.431687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.431704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.431719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.431735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.431753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.431770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.431785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.431801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.431816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.431838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.431853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.431869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.431884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.431901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.431915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.431932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.697 [2024-04-15 16:14:50.431946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.431963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.697 [2024-04-15 16:14:50.431977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.431993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.697 [2024-04-15 16:14:50.432008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.697 [2024-04-15 16:14:50.432040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.697 [2024-04-15 16:14:50.432070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.697 [2024-04-15 16:14:50.432101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.697 [2024-04-15 16:14:50.432133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.697 [2024-04-15 16:14:50.432164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.697 [2024-04-15 16:14:50.432682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.697 [2024-04-15 16:14:50.432713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.697 [2024-04-15 16:14:50.432744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 
[2024-04-15 16:14:50.432760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.697 [2024-04-15 16:14:50.432776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.697 [2024-04-15 16:14:50.432807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.697 [2024-04-15 16:14:50.432839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.697 [2024-04-15 16:14:50.432855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.698 [2024-04-15 16:14:50.432871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.432887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.698 [2024-04-15 16:14:50.432902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.432918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.698 [2024-04-15 16:14:50.432932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.432948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.432963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.432980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.432994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.433025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.433061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.433092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.433123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.433154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.433185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.433215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.433247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.433279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.433309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.433340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.433374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:69 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.698 [2024-04-15 16:14:50.433404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2031460 is same with the state(5) to be set 00:22:41.698 [2024-04-15 16:14:50.433442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.698 [2024-04-15 16:14:50.433454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.698 [2024-04-15 16:14:50.433465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82584 len:8 PRP1 0x0 PRP2 0x0 00:22:41.698 [2024-04-15 16:14:50.433480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.698 [2024-04-15 16:14:50.433539] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2031460 was disconnected and freed. reset controller. 00:22:41.698 [2024-04-15 16:14:50.434606] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:41.698 [2024-04-15 16:14:50.434662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20322e0 (9): Bad file descriptor 00:22:41.698 [2024-04-15 16:14:50.434946] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.698 [2024-04-15 16:14:50.435013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.698 [2024-04-15 16:14:50.435069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.698 [2024-04-15 16:14:50.435087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20322e0 with addr=10.0.0.2, port=4421 00:22:41.698 [2024-04-15 16:14:50.435103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20322e0 is same with the state(5) to be set 00:22:41.698 [2024-04-15 16:14:50.435134] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20322e0 (9): Bad file descriptor 00:22:41.698 [2024-04-15 16:14:50.435163] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:41.698 [2024-04-15 16:14:50.435178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:41.698 [2024-04-15 16:14:50.435195] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:41.698 [2024-04-15 16:14:50.435223] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:41.698 [2024-04-15 16:14:50.435237] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:41.698 [2024-04-15 16:15:00.485001] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
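The sequence above shows the multipath host losing its TCP connection (connect() failed, errno = 111), failing an immediate controller reinitialization, and then reconnecting to the second listener on 10.0.0.2:4421 until the reset finally succeeds. A minimal sketch of the target-side listener swap that forces this kind of failover, using the same rpc.py listener RPCs that appear elsewhere in this log (ports 4420/4421 and the cnode1 subsystem are taken from the surrounding output; the exact ordering used by the multipath test script is assumed, not traced here):

# Expose the subsystem on the second TCP port, then drop the first, so a host
# attached with reconnect options must fail over to 10.0.0.2:4421.
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# On the host side, bdev_nvme keeps retrying the connection until the new
# listener answers, which is what the reset log above records.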
00:22:41.698 Received shutdown signal, test time was about 55.729512 seconds 00:22:41.698 00:22:41.698 Latency(us) 00:22:41.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.698 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:41.698 Verification LBA range: start 0x0 length 0x4000 00:22:41.698 Nvme0n1 : 55.73 7899.09 30.86 0.00 0.00 16178.61 1053.26 7030452.42 00:22:41.698 =================================================================================================================== 00:22:41.698 Total : 7899.09 30.86 0.00 0.00 16178.61 1053.26 7030452.42 00:22:41.698 16:15:10 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:41.698 16:15:11 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:41.698 16:15:11 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:41.698 16:15:11 -- host/multipath.sh@125 -- # nvmftestfini 00:22:41.698 16:15:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:41.698 16:15:11 -- nvmf/common.sh@117 -- # sync 00:22:41.698 16:15:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.698 16:15:11 -- nvmf/common.sh@120 -- # set +e 00:22:41.698 16:15:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.698 16:15:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.698 rmmod nvme_tcp 00:22:41.698 rmmod nvme_fabrics 00:22:41.698 16:15:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.698 16:15:11 -- nvmf/common.sh@124 -- # set -e 00:22:41.698 16:15:11 -- nvmf/common.sh@125 -- # return 0 00:22:41.698 16:15:11 -- nvmf/common.sh@478 -- # '[' -n 91956 ']' 00:22:41.698 16:15:11 -- nvmf/common.sh@479 -- # killprocess 91956 00:22:41.698 16:15:11 -- common/autotest_common.sh@936 -- # '[' -z 91956 ']' 00:22:41.698 16:15:11 -- common/autotest_common.sh@940 -- # kill -0 91956 00:22:41.698 16:15:11 -- common/autotest_common.sh@941 -- # uname 00:22:41.698 16:15:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:41.698 16:15:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91956 00:22:41.699 16:15:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:41.699 16:15:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:41.699 16:15:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91956' 00:22:41.699 killing process with pid 91956 00:22:41.699 16:15:11 -- common/autotest_common.sh@955 -- # kill 91956 00:22:41.699 16:15:11 -- common/autotest_common.sh@960 -- # wait 91956 00:22:41.699 16:15:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:41.699 16:15:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:41.699 16:15:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:41.699 16:15:11 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:41.699 16:15:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:41.699 16:15:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.699 16:15:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.699 16:15:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.699 16:15:11 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:41.699 ************************************ 00:22:41.699 END TEST nvmf_multipath 00:22:41.699 ************************************ 00:22:41.699 00:22:41.699 real 1m1.561s 00:22:41.699 user 2m47.478s 00:22:41.699 sys 
0m22.245s 00:22:41.699 16:15:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:41.699 16:15:11 -- common/autotest_common.sh@10 -- # set +x 00:22:41.699 16:15:11 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:41.699 16:15:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:41.699 16:15:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:41.699 16:15:11 -- common/autotest_common.sh@10 -- # set +x 00:22:41.699 ************************************ 00:22:41.699 START TEST nvmf_timeout 00:22:41.699 ************************************ 00:22:41.699 16:15:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:41.957 * Looking for test storage... 00:22:41.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:41.957 16:15:11 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:41.957 16:15:11 -- nvmf/common.sh@7 -- # uname -s 00:22:41.957 16:15:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.957 16:15:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.957 16:15:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.957 16:15:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.957 16:15:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.957 16:15:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.957 16:15:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.957 16:15:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.957 16:15:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.957 16:15:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.957 16:15:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:22:41.957 16:15:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:22:41.957 16:15:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.957 16:15:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.957 16:15:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:41.957 16:15:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.957 16:15:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:41.957 16:15:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.957 16:15:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.957 16:15:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.957 16:15:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.957 16:15:11 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.958 16:15:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.958 16:15:11 -- paths/export.sh@5 -- # export PATH 00:22:41.958 16:15:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.958 16:15:11 -- nvmf/common.sh@47 -- # : 0 00:22:41.958 16:15:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:41.958 16:15:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:41.958 16:15:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.958 16:15:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.958 16:15:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.958 16:15:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:41.958 16:15:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:41.958 16:15:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:41.958 16:15:11 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:41.958 16:15:11 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:41.958 16:15:11 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:41.958 16:15:11 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:41.958 16:15:11 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.958 16:15:11 -- host/timeout.sh@19 -- # nvmftestinit 00:22:41.958 16:15:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:41.958 16:15:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.958 16:15:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:41.958 16:15:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:41.958 16:15:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:41.958 16:15:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.958 16:15:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.958 16:15:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.958 16:15:11 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 
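The common.sh variables traced above (NVMF_PORT, NVME_HOSTNQN, NVME_HOSTID, NVME_CONNECT) are the pieces a kernel NVMe/TCP initiator would combine into a single nvme-cli connect call. A minimal illustrative sketch under that assumption; the address and subsystem NQN are the ones used elsewhere in this run, and this invocation is not part of the test's own trace, which uses the SPDK bdevperf initiator instead:

# Generate a host NQN the same way common.sh does, then connect a kernel
# initiator to the TCP target exposed at 10.0.0.2:4420.
HOSTNQN=$(nvme gen-hostnqn)
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$HOSTNQN" --hostid="$(uuidgen)"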
00:22:41.958 16:15:11 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:41.958 16:15:11 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:41.958 16:15:11 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:41.958 16:15:11 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:41.958 16:15:11 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:41.958 16:15:11 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.958 16:15:11 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.958 16:15:11 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:41.958 16:15:11 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:41.958 16:15:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:41.958 16:15:11 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:41.958 16:15:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:41.958 16:15:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.958 16:15:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:41.958 16:15:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:41.958 16:15:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:41.958 16:15:11 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:41.958 16:15:11 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:41.958 16:15:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:41.958 Cannot find device "nvmf_tgt_br" 00:22:41.958 16:15:11 -- nvmf/common.sh@155 -- # true 00:22:41.958 16:15:11 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:41.958 Cannot find device "nvmf_tgt_br2" 00:22:41.958 16:15:11 -- nvmf/common.sh@156 -- # true 00:22:41.958 16:15:11 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:41.958 16:15:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:41.958 Cannot find device "nvmf_tgt_br" 00:22:41.958 16:15:11 -- nvmf/common.sh@158 -- # true 00:22:41.958 16:15:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:41.958 Cannot find device "nvmf_tgt_br2" 00:22:41.958 16:15:11 -- nvmf/common.sh@159 -- # true 00:22:41.958 16:15:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:41.958 16:15:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:41.958 16:15:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:41.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:41.958 16:15:11 -- nvmf/common.sh@162 -- # true 00:22:41.958 16:15:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:41.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:41.958 16:15:11 -- nvmf/common.sh@163 -- # true 00:22:41.958 16:15:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:41.958 16:15:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:41.958 16:15:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:41.958 16:15:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:41.958 16:15:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:42.216 16:15:11 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:42.216 16:15:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:22:42.216 16:15:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:42.216 16:15:11 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:42.216 16:15:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:42.216 16:15:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:42.216 16:15:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:42.216 16:15:11 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:42.216 16:15:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:42.216 16:15:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:42.216 16:15:12 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:42.216 16:15:12 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:42.216 16:15:12 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:42.216 16:15:12 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:42.216 16:15:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:42.216 16:15:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:42.216 16:15:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:42.216 16:15:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:42.216 16:15:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:42.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:22:42.216 00:22:42.216 --- 10.0.0.2 ping statistics --- 00:22:42.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.216 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:22:42.216 16:15:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:42.216 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:42.216 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:22:42.216 00:22:42.216 --- 10.0.0.3 ping statistics --- 00:22:42.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.216 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:22:42.216 16:15:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:42.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:42.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:22:42.217 00:22:42.217 --- 10.0.0.1 ping statistics --- 00:22:42.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.217 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:42.217 16:15:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.217 16:15:12 -- nvmf/common.sh@422 -- # return 0 00:22:42.217 16:15:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:42.217 16:15:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.217 16:15:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:42.217 16:15:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:42.217 16:15:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.217 16:15:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:42.217 16:15:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:42.217 16:15:12 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:42.217 16:15:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:42.217 16:15:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:42.217 16:15:12 -- common/autotest_common.sh@10 -- # set +x 00:22:42.217 16:15:12 -- nvmf/common.sh@470 -- # nvmfpid=93127 00:22:42.217 16:15:12 -- nvmf/common.sh@471 -- # waitforlisten 93127 00:22:42.217 16:15:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:42.217 16:15:12 -- common/autotest_common.sh@817 -- # '[' -z 93127 ']' 00:22:42.217 16:15:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.217 16:15:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:42.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.217 16:15:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.217 16:15:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:42.217 16:15:12 -- common/autotest_common.sh@10 -- # set +x 00:22:42.217 [2024-04-15 16:15:12.167961] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:22:42.217 [2024-04-15 16:15:12.168060] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.474 [2024-04-15 16:15:12.309990] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:42.474 [2024-04-15 16:15:12.360149] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.474 [2024-04-15 16:15:12.360207] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.474 [2024-04-15 16:15:12.360218] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.474 [2024-04-15 16:15:12.360228] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.474 [2024-04-15 16:15:12.360236] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
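The nvmf_veth_init sequence traced above builds the virtual topology this test runs on: a network namespace for the target, veth pairs for initiator and target, addresses in 10.0.0.0/24, a bridge tying the peers together, an iptables accept rule for port 4420, and ping checks in both directions. Condensed into a minimal standalone sketch using the same commands as the trace, with the framework's cleanup steps and the second target interface omitted:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator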
00:22:42.474 [2024-04-15 16:15:12.360320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.474 [2024-04-15 16:15:12.360329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.407 16:15:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:43.407 16:15:13 -- common/autotest_common.sh@850 -- # return 0 00:22:43.407 16:15:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:43.407 16:15:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:43.407 16:15:13 -- common/autotest_common.sh@10 -- # set +x 00:22:43.407 16:15:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.407 16:15:13 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:43.407 16:15:13 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:43.676 [2024-04-15 16:15:13.386943] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.676 16:15:13 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:43.940 Malloc0 00:22:43.940 16:15:13 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:43.940 16:15:13 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:44.198 16:15:14 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.457 [2024-04-15 16:15:14.394939] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.457 16:15:14 -- host/timeout.sh@32 -- # bdevperf_pid=93182 00:22:44.457 16:15:14 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:44.457 16:15:14 -- host/timeout.sh@34 -- # waitforlisten 93182 /var/tmp/bdevperf.sock 00:22:44.457 16:15:14 -- common/autotest_common.sh@817 -- # '[' -z 93182 ']' 00:22:44.457 16:15:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.457 16:15:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:44.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:44.457 16:15:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.457 16:15:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:44.457 16:15:14 -- common/autotest_common.sh@10 -- # set +x 00:22:44.715 [2024-04-15 16:15:14.455317] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
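The trace above is the complete target-side bring-up for the timeout test: create the TCP transport, back the cnode1 subsystem with a 64 MiB malloc bdev, add the namespace and a listener on 10.0.0.2:4420, then launch bdevperf (core mask 0x4, queue depth 128, 4 KiB verify workload) against its own RPC socket. The same sequence as a minimal sketch, with rpc.py shortened from the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path; the host-side attach with its reconnect knobs is taken from the trace that follows below:

# Target side (nvmf_tgt already running inside the nvmf_tgt_ns_spdk namespace):
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: start bdevperf with its own RPC socket, then attach the remote
# controller with a 5 s ctrlr-loss timeout and a 2 s reconnect delay.
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2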
00:22:44.715 [2024-04-15 16:15:14.455430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93182 ] 00:22:44.715 [2024-04-15 16:15:14.609772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.973 [2024-04-15 16:15:14.687657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.973 16:15:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:44.973 16:15:14 -- common/autotest_common.sh@850 -- # return 0 00:22:44.973 16:15:14 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:45.230 16:15:15 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:45.488 NVMe0n1 00:22:45.746 16:15:15 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:45.746 16:15:15 -- host/timeout.sh@51 -- # rpc_pid=93194 00:22:45.746 16:15:15 -- host/timeout.sh@53 -- # sleep 1 00:22:45.746 Running I/O for 10 seconds... 00:22:46.682 16:15:16 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.943 [2024-04-15 16:15:16.661006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661173] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661389] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.943 [2024-04-15 16:15:16.661410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.943 [2024-04-15 16:15:16.661432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.943 [2024-04-15 16:15:16.661453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.943 [2024-04-15 16:15:16.661473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.943 [2024-04-15 16:15:16.661494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.943 [2024-04-15 16:15:16.661514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.943 [2024-04-15 16:15:16.661535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.943 [2024-04-15 16:15:16.661556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.943 [2024-04-15 16:15:16.661901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.943 [2024-04-15 16:15:16.661912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:46.943 [2024-04-15 16:15:16.661923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.661933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.661945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.661955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.661967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.661978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.661990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662143] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662362] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79240 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.662628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:46.944 [2024-04-15 16:15:16.662826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.662979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.944 [2024-04-15 16:15:16.662989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.663000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.663013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.663024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.663033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.663044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.663054] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.663065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.663074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.944 [2024-04-15 16:15:16.663098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.944 [2024-04-15 16:15:16.663106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.945 [2024-04-15 16:15:16.663125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.945 [2024-04-15 16:15:16.663143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.945 [2024-04-15 16:15:16.663179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.945 [2024-04-15 16:15:16.663199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.945 [2024-04-15 16:15:16.663220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.945 [2024-04-15 16:15:16.663240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.945 [2024-04-15 16:15:16.663261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.945 [2024-04-15 16:15:16.663281] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:46.945 [2024-04-15 16:15:16.663302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.945 [2024-04-15 16:15:16.663322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.945 [2024-04-15 16:15:16.663343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.945 [2024-04-15 16:15:16.663365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.945 [2024-04-15 16:15:16.663385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.945 [2024-04-15 16:15:16.663406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.945 [2024-04-15 16:15:16.663426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.945 [2024-04-15 16:15:16.663447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79d840 is same with the state(5) to be set 00:22:46.945 [2024-04-15 16:15:16.663471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78768 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79368 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79376 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79384 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79392 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79400 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79408 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79416 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79424 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79432 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79440 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79448 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79456 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:46.945 [2024-04-15 16:15:16.663932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79464 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.663967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.663974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.663982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79472 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.663992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.664002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.664009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.664017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78776 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.664026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.664036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.664044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.664052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78784 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.664061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.664071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.664079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.664088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78792 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.664098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.664107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.664115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.664123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78800 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.664132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.664141] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.664149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.664157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78808 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.664166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.664176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.664184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.664192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78816 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.664201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.945 [2024-04-15 16:15:16.664211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.945 [2024-04-15 16:15:16.664218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.945 [2024-04-15 16:15:16.664228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78824 len:8 PRP1 0x0 PRP2 0x0 00:22:46.945 [2024-04-15 16:15:16.676837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.946 [2024-04-15 16:15:16.676883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:46.946 [2024-04-15 16:15:16.676895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:46.946 [2024-04-15 16:15:16.676909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78832 len:8 PRP1 0x0 PRP2 0x0 00:22:46.946 [2024-04-15 16:15:16.676923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.946 [2024-04-15 16:15:16.676993] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x79d840 was disconnected and freed. reset controller. 
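[editor's note] The long run of completions above is the initiator flushing its I/O queue pair while the controller resets: every queued READ/WRITE is completed with status 00/08, i.e. Status Code Type 0x0 (Generic Command Status) with Status Code 0x08 (Command Aborted due to SQ Deletion), after which qpair 0x79d840 is disconnected and freed. The connect() failures that follow report errno 111 (ECONNREFUSED), consistent with the target's TCP listener having been taken down earlier in this test. A minimal sketch of putting the listener back, mirroring the nvmf_subsystem_add_listener call that appears further down in this log (script path, NQN, address and port copied from the log itself), would be:

  # sketch only - re-create the NVMe/TCP listener so the initiator's
  # reconnect loop can succeed again (arguments copied from this log)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420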
00:22:46.946 [2024-04-15 16:15:16.677110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.946 [2024-04-15 16:15:16.677128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.946 [2024-04-15 16:15:16.677145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.946 [2024-04-15 16:15:16.677159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.946 [2024-04-15 16:15:16.677173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.946 [2024-04-15 16:15:16.677187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.946 [2024-04-15 16:15:16.677202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.946 [2024-04-15 16:15:16.677216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.946 [2024-04-15 16:15:16.677230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76f1e0 is same with the state(5) to be set 00:22:46.946 [2024-04-15 16:15:16.677554] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:46.946 [2024-04-15 16:15:16.677669] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76f1e0 (9): Bad file descriptor 00:22:46.946 [2024-04-15 16:15:16.677837] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.946 [2024-04-15 16:15:16.677990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.946 [2024-04-15 16:15:16.678075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.946 [2024-04-15 16:15:16.678116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x76f1e0 with addr=10.0.0.2, port=4420 00:22:46.946 [2024-04-15 16:15:16.678142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76f1e0 is same with the state(5) to be set 00:22:46.946 [2024-04-15 16:15:16.678183] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76f1e0 (9): Bad file descriptor 00:22:46.946 [2024-04-15 16:15:16.678213] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:46.946 [2024-04-15 16:15:16.678233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:46.946 [2024-04-15 16:15:16.678256] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:46.946 [2024-04-15 16:15:16.678301] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:46.946 [2024-04-15 16:15:16.678328] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:46.946 16:15:16 -- host/timeout.sh@56 -- # sleep 2 00:22:48.884 [2024-04-15 16:15:18.678500] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.884 [2024-04-15 16:15:18.678611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.885 [2024-04-15 16:15:18.678647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.885 [2024-04-15 16:15:18.678661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x76f1e0 with addr=10.0.0.2, port=4420 00:22:48.885 [2024-04-15 16:15:18.678675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76f1e0 is same with the state(5) to be set 00:22:48.885 [2024-04-15 16:15:18.678701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76f1e0 (9): Bad file descriptor 00:22:48.885 [2024-04-15 16:15:18.678720] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:48.885 [2024-04-15 16:15:18.678730] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:48.885 [2024-04-15 16:15:18.678741] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:48.885 [2024-04-15 16:15:18.678767] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.885 [2024-04-15 16:15:18.678777] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:48.885 16:15:18 -- host/timeout.sh@57 -- # get_controller 00:22:48.885 16:15:18 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:48.885 16:15:18 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:49.142 16:15:18 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:49.142 16:15:18 -- host/timeout.sh@58 -- # get_bdev 00:22:49.142 16:15:18 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:49.142 16:15:18 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:49.400 16:15:19 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:49.400 16:15:19 -- host/timeout.sh@61 -- # sleep 5 00:22:50.850 [2024-04-15 16:15:20.678933] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.850 [2024-04-15 16:15:20.679040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.850 [2024-04-15 16:15:20.679081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.850 [2024-04-15 16:15:20.679095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x76f1e0 with addr=10.0.0.2, port=4420 00:22:50.850 [2024-04-15 16:15:20.679110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76f1e0 is same with the state(5) to be set 00:22:50.850 [2024-04-15 16:15:20.679137] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76f1e0 (9): Bad file descriptor 00:22:50.850 [2024-04-15 16:15:20.679156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:50.850 [2024-04-15 16:15:20.679167] 
nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:50.850 [2024-04-15 16:15:20.679179] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:50.850 [2024-04-15 16:15:20.679205] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.850 [2024-04-15 16:15:20.679216] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.749 [2024-04-15 16:15:22.679281] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.124 00:22:54.124 Latency(us) 00:22:54.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.124 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:54.124 Verification LBA range: start 0x0 length 0x4000 00:22:54.124 NVMe0n1 : 8.09 1211.71 4.73 15.82 0.00 104154.43 3089.55 7030452.42 00:22:54.124 =================================================================================================================== 00:22:54.124 Total : 1211.71 4.73 15.82 0.00 104154.43 3089.55 7030452.42 00:22:54.124 0 00:22:54.382 16:15:24 -- host/timeout.sh@62 -- # get_controller 00:22:54.383 16:15:24 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:54.383 16:15:24 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:54.640 16:15:24 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:54.640 16:15:24 -- host/timeout.sh@63 -- # get_bdev 00:22:54.640 16:15:24 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:54.640 16:15:24 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:54.898 16:15:24 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:54.898 16:15:24 -- host/timeout.sh@65 -- # wait 93194 00:22:54.898 16:15:24 -- host/timeout.sh@67 -- # killprocess 93182 00:22:54.898 16:15:24 -- common/autotest_common.sh@936 -- # '[' -z 93182 ']' 00:22:54.898 16:15:24 -- common/autotest_common.sh@940 -- # kill -0 93182 00:22:54.898 16:15:24 -- common/autotest_common.sh@941 -- # uname 00:22:54.898 16:15:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:54.898 16:15:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93182 00:22:54.898 16:15:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:54.898 killing process with pid 93182 00:22:54.898 16:15:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:54.898 16:15:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93182' 00:22:54.898 Received shutdown signal, test time was about 9.191971 seconds 00:22:54.898 00:22:54.898 Latency(us) 00:22:54.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.898 =================================================================================================================== 00:22:54.899 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.899 16:15:24 -- common/autotest_common.sh@955 -- # kill 93182 00:22:54.899 16:15:24 -- common/autotest_common.sh@960 -- # wait 93182 00:22:55.157 16:15:24 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.415 [2024-04-15 16:15:25.247781] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:22:55.415 16:15:25 -- host/timeout.sh@74 -- # bdevperf_pid=93314 00:22:55.415 16:15:25 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:55.415 16:15:25 -- host/timeout.sh@76 -- # waitforlisten 93314 /var/tmp/bdevperf.sock 00:22:55.415 16:15:25 -- common/autotest_common.sh@817 -- # '[' -z 93314 ']' 00:22:55.415 16:15:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.415 16:15:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:55.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.415 16:15:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.415 16:15:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:55.415 16:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.415 [2024-04-15 16:15:25.336179] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:22:55.415 [2024-04-15 16:15:25.336317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93314 ] 00:22:55.673 [2024-04-15 16:15:25.486043] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.673 [2024-04-15 16:15:25.556729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.605 16:15:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:56.605 16:15:26 -- common/autotest_common.sh@850 -- # return 0 00:22:56.605 16:15:26 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:56.605 16:15:26 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:56.863 NVMe0n1 00:22:57.121 16:15:26 -- host/timeout.sh@84 -- # rpc_pid=93339 00:22:57.121 16:15:26 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:57.121 16:15:26 -- host/timeout.sh@86 -- # sleep 1 00:22:57.121 Running I/O for 10 seconds... 
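[editor's note] Pulling the traced invocations above together, the second bdevperf instance is wired up roughly as sketched below; this is only a reconstruction of commands already visible in this log (the backgrounding is an assumption, and the real test waits for the RPC socket before issuing RPCs), not a new procedure. The three attach flags are what the rest of the test exercises: with these values the bdev_nvme layer retries the reconnect roughly every 1 second, starts failing I/O fast after about 2 seconds without a connection, and gives up the controller entirely after about 5 seconds.

  # sketch assembled from the trace above (paths and flags copied verbatim)
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock
  $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 -f &
  # (the real test waits here for the RPC listener on $SOCK to come up)
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options -r -1
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests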
00:22:58.055 16:15:27 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.316 [2024-04-15 16:15:28.151165] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151231] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151247] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151261] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151276] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151290] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151304] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151319] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151332] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151360] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151374] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151388] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151402] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151416] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151429] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151443] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151457] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151471] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151485] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151498] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151512] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151525] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c35050 is same with the state(5) to be set 00:22:58.316 [2024-04-15 16:15:28.151591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.316 [2024-04-15 16:15:28.151637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.316 [2024-04-15 16:15:28.151659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.316 [2024-04-15 16:15:28.151671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.316 [2024-04-15 16:15:28.151684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.316 [2024-04-15 16:15:28.151695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.316 [2024-04-15 16:15:28.151708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.316 [2024-04-15 16:15:28.151719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.316 [2024-04-15 16:15:28.151731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.316 [2024-04-15 16:15:28.151742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.316 [2024-04-15 16:15:28.151754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.316 [2024-04-15 16:15:28.151764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.316 [2024-04-15 16:15:28.151777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.316 [2024-04-15 16:15:28.151787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.316 [2024-04-15 16:15:28.151800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.316 [2024-04-15 16:15:28.151810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.316 [2024-04-15 16:15:28.151822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.316 [2024-04-15 16:15:28.151833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:58.316 [2024-04-15 16:15:28.151845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.316 [2024-04-15 16:15:28.151856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.316 [2024-04-15 16:15:28.151868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.316 [2024-04-15 16:15:28.151879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.316 [2024-04-15 16:15:28.151891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.316 [2024-04-15 16:15:28.151901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.316 [2024-04-15 16:15:28.151914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.316 [2024-04-15 16:15:28.151924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.316 [2024-04-15 16:15:28.151936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.316 [2024-04-15 16:15:28.151947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.316 [2024-04-15 16:15:28.151959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.317 [2024-04-15 16:15:28.151970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.151982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.317 [2024-04-15 16:15:28.151993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.317 [2024-04-15 16:15:28.152018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.317 [2024-04-15 16:15:28.152041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.317 [2024-04-15 16:15:28.152064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152077] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.317 [2024-04-15 16:15:28.152087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.317 [2024-04-15 16:15:28.152110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.317 [2024-04-15 16:15:28.152133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.317 [2024-04-15 16:15:28.152156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.317 [2024-04-15 16:15:28.152179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152306] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76536 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.317 [2024-04-15 16:15:28.152546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.317 [2024-04-15 16:15:28.152569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.317 [2024-04-15 16:15:28.152592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.152603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.152626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.152648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.152671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.152694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.152716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.152740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.152763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:58.318 [2024-04-15 16:15:28.152785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.152808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.152831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.152853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.152876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.152899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.152921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.318 [2024-04-15 16:15:28.152944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.318 [2024-04-15 16:15:28.152967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.152980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.318 [2024-04-15 16:15:28.152990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.153002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.318 [2024-04-15 16:15:28.153013] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.153025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.318 [2024-04-15 16:15:28.153035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.153048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.318 [2024-04-15 16:15:28.153058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.153070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.318 [2024-04-15 16:15:28.153081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.153094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.318 [2024-04-15 16:15:28.153104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.153117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.153127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.153139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.153150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.153162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.153173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.318 [2024-04-15 16:15:28.153185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.318 [2024-04-15 16:15:28.153195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.319 [2024-04-15 16:15:28.153218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.319 [2024-04-15 16:15:28.153241] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.319 [2024-04-15 16:15:28.153264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.319 [2024-04-15 16:15:28.153288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 
[2024-04-15 16:15:28.153753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.319 [2024-04-15 16:15:28.153856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.319 [2024-04-15 16:15:28.153868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.153879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.153892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.320 [2024-04-15 16:15:28.153902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.153915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.320 [2024-04-15 16:15:28.153925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.153938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.320 [2024-04-15 16:15:28.153948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.153960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.320 [2024-04-15 16:15:28.153971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.153983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.320 [2024-04-15 16:15:28.153994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.320 [2024-04-15 16:15:28.154016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.320 [2024-04-15 16:15:28.154039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.320 [2024-04-15 16:15:28.154062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:45 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.320 [2024-04-15 16:15:28.154428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76352 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:58.320 [2024-04-15 16:15:28.154451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.320 [2024-04-15 16:15:28.154474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.320 [2024-04-15 16:15:28.154486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.321 [2024-04-15 16:15:28.154497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.321 [2024-04-15 16:15:28.154509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.321 [2024-04-15 16:15:28.154520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.321 [2024-04-15 16:15:28.154537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.321 [2024-04-15 16:15:28.154548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.321 [2024-04-15 16:15:28.154560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.321 [2024-04-15 16:15:28.154572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.321 [2024-04-15 16:15:28.154592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.321 [2024-04-15 16:15:28.154603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.321 [2024-04-15 16:15:28.154615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed76a0 is same with the state(5) to be set 00:22:58.321 [2024-04-15 16:15:28.154629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.321 [2024-04-15 16:15:28.154638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.321 [2024-04-15 16:15:28.154647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76408 len:8 PRP1 0x0 PRP2 0x0 00:22:58.321 [2024-04-15 16:15:28.154658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.321 [2024-04-15 16:15:28.154709] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xed76a0 was disconnected and freed. reset controller. 
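The dump above is the fallout of the nvmf_subsystem_remove_listener call at host/timeout.sh@87 at the top of this block: once the target stops listening, the TCP qpair is torn down, every queued READ/WRITE command is completed with ABORTED - SQ DELETION (00/08), and bdev_nvme frees the qpair and schedules a controller reset. A minimal sketch of that listener bounce, using only the rpc.py invocations that appear verbatim in this log (the rpc variable and the repo path are assumptions matching the test VM layout, not part of the harness itself):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the TCP listener; in-flight I/O on the connected qpair is aborted (SQ DELETION).
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1   # host-side reconnect attempts fail while no listener is present
  # Restore the listener so the controller reset can eventually succeed.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420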
00:22:58.321 [2024-04-15 16:15:28.154788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.321 [2024-04-15 16:15:28.154802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.321 [2024-04-15 16:15:28.154814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.321 [2024-04-15 16:15:28.154825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.321 [2024-04-15 16:15:28.154836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.321 [2024-04-15 16:15:28.154846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.321 [2024-04-15 16:15:28.154858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.321 [2024-04-15 16:15:28.154868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.321 [2024-04-15 16:15:28.154879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd020 is same with the state(5) to be set 00:22:58.321 [2024-04-15 16:15:28.155126] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:58.321 [2024-04-15 16:15:28.155170] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd020 (9): Bad file descriptor 00:22:58.321 [2024-04-15 16:15:28.155264] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.321 [2024-04-15 16:15:28.155331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.321 [2024-04-15 16:15:28.155369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.321 [2024-04-15 16:15:28.155383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecd020 with addr=10.0.0.2, port=4420 00:22:58.321 [2024-04-15 16:15:28.155394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd020 is same with the state(5) to be set 00:22:58.321 [2024-04-15 16:15:28.155411] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd020 (9): Bad file descriptor 00:22:58.321 [2024-04-15 16:15:28.155427] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:58.321 [2024-04-15 16:15:28.155438] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:58.321 [2024-04-15 16:15:28.155450] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:58.321 [2024-04-15 16:15:28.155472] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
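The connect() failures above report errno = 111, which on Linux is ECONNREFUSED: nothing is accepting on 10.0.0.2:4420 while the listener is removed, so both the io_uring and POSIX socket providers fail and the controller reinitialization keeps failing until the listener comes back. An illustrative shell one-liner (not part of the test) to resolve the errno name:

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused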
00:22:58.321 [2024-04-15 16:15:28.155484] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:58.321 16:15:28 -- host/timeout.sh@90 -- # sleep 1
00:22:59.255 [2024-04-15 16:15:29.155653] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:59.255 [2024-04-15 16:15:29.155777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:59.255 [2024-04-15 16:15:29.155821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:59.255 [2024-04-15 16:15:29.155837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecd020 with addr=10.0.0.2, port=4420
00:22:59.255 [2024-04-15 16:15:29.155852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd020 is same with the state(5) to be set
00:22:59.255 [2024-04-15 16:15:29.155879] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd020 (9): Bad file descriptor
00:22:59.255 [2024-04-15 16:15:29.155899] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:59.255 [2024-04-15 16:15:29.155909] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:59.255 [2024-04-15 16:15:29.155921] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:59.255 [2024-04-15 16:15:29.155946] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:59.255 [2024-04-15 16:15:29.155958] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:59.255 16:15:29 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:59.514 [2024-04-15 16:15:29.453173] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:59.772 16:15:29 -- host/timeout.sh@92 -- # wait 93339
00:23:00.339 [2024-04-15 16:15:30.171262] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:08.455
00:23:08.455                                              Latency(us)
00:23:08.455 Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s     Average        min         max
00:23:08.455 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:08.455 Verification LBA range: start 0x0 length 0x4000
00:23:08.455 NVMe0n1            :      10.01    6688.44      26.13       0.00       0.00    19098.69    1248.30  3019898.88
00:23:08.455 ===================================================================================================================
00:23:08.455 Total              :               6688.44      26.13       0.00       0.00    19098.69    1248.30  3019898.88
00:23:08.455 0
00:23:08.455 16:15:36 -- host/timeout.sh@97 -- # rpc_pid=93444
00:23:08.455 16:15:36 -- host/timeout.sh@98 -- # sleep 1
00:23:08.455 16:15:36 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:08.455 Running I/O for 10 seconds...
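As a quick consistency check on the bdevperf summary above: the MiB/s column follows from the IOPS column and the 4096-byte I/O size, and the average latency is roughly what Little's law predicts at queue depth 128 (128 / 0.01909869 s ≈ 6702 IOPS versus the reported 6688.44). The throughput arithmetic as a shell one-liner:

  python3 -c 'print(round(6688.44 * 4096 / 2**20, 2))'   # 26.13, matching the reported MiB/s column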
00:23:08.455 16:15:37 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:08.455 [2024-04-15 16:15:38.195245] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195302] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195325] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195345] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195355] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195364] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195374] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195384] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195393] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195413] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195422] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195431] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195441] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195451] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195460] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195470] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32440 is same with the state(5) to be set 00:23:08.456 [2024-04-15 16:15:38.195527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195816] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.195989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.195999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.196011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.196022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.196034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.456 [2024-04-15 16:15:38.196045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.196057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.456 [2024-04-15 16:15:38.196067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.196080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.456 [2024-04-15 16:15:38.196090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.196102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.456 [2024-04-15 16:15:38.196113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.196125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.456 [2024-04-15 16:15:38.196135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.196147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.456 [2024-04-15 16:15:38.196158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.196170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.456 [2024-04-15 16:15:38.196180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.196192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.456 [2024-04-15 16:15:38.196203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.196215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.196226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.456 [2024-04-15 16:15:38.196238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.456 [2024-04-15 16:15:38.196248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.457 [2024-04-15 16:15:38.196271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:08.457 [2024-04-15 16:15:38.196284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.457 [2024-04-15 16:15:38.196294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.457 [2024-04-15 16:15:38.196318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.457 [2024-04-15 16:15:38.196341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.457 [2024-04-15 16:15:38.196364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.457 [2024-04-15 16:15:38.196386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196511] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196749] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.196940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.457 [2024-04-15 16:15:38.196962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80392 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.457 [2024-04-15 16:15:38.196985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.196997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.457 [2024-04-15 16:15:38.197008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.197020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.457 [2024-04-15 16:15:38.197031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.197043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.457 [2024-04-15 16:15:38.197054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.197066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.457 [2024-04-15 16:15:38.197076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.197089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.457 [2024-04-15 16:15:38.197099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.197111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.457 [2024-04-15 16:15:38.197122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.197134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.457 [2024-04-15 16:15:38.197144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.457 [2024-04-15 16:15:38.197156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:08.458 [2024-04-15 16:15:38.197212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197441] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.458 [2024-04-15 16:15:38.197508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.458 [2024-04-15 16:15:38.197530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.458 [2024-04-15 16:15:38.197553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.458 [2024-04-15 16:15:38.197584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.458 [2024-04-15 16:15:38.197607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.458 [2024-04-15 16:15:38.197646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.458 [2024-04-15 16:15:38.197670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.458 [2024-04-15 16:15:38.197693] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.197984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.197994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.198006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.198017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.198029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.198039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.198052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.458 [2024-04-15 16:15:38.198063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.458 [2024-04-15 16:15:38.198075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.459 [2024-04-15 16:15:38.198086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.459 [2024-04-15 16:15:38.198108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.459 [2024-04-15 16:15:38.198134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.459 [2024-04-15 16:15:38.198157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 
[2024-04-15 16:15:38.198169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198399] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.459 [2024-04-15 16:15:38.198517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef97f0 is same with the state(5) to be set 00:23:08.459 [2024-04-15 16:15:38.198544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:08.459 [2024-04-15 16:15:38.198553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:08.459 [2024-04-15 16:15:38.198562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80632 len:8 PRP1 0x0 PRP2 0x0 00:23:08.459 [2024-04-15 16:15:38.198581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.459 [2024-04-15 16:15:38.198632] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xef97f0 was disconnected and freed. reset controller. 
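The abort storm above is the intended fault injection for this test case: once the listener is pulled from the target, every command still queued on qid:1 completes with ABORTED - SQ DELETION (NVMe generic status 00/08), qpair 0xef97f0 is disconnected and freed, and the host side starts resetting the controller. A minimal sketch of the trigger and of the restore step that appears further down in this log, using only the rpc.py calls shown here (repo paths abbreviated):

  # drop the TCP listener -> in-flight I/O on the data qpair is aborted (SQ deletion)
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # restore it a few seconds later so the host's periodic reconnect attempts can succeed again
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420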
00:23:08.459 [2024-04-15 16:15:38.198858] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:08.459 [2024-04-15 16:15:38.198938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd020 (9): Bad file descriptor 00:23:08.459 [2024-04-15 16:15:38.199027] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:08.459 [2024-04-15 16:15:38.199071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:08.459 [2024-04-15 16:15:38.199108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:08.459 [2024-04-15 16:15:38.199123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecd020 with addr=10.0.0.2, port=4420 00:23:08.459 [2024-04-15 16:15:38.199134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd020 is same with the state(5) to be set 00:23:08.459 [2024-04-15 16:15:38.199151] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd020 (9): Bad file descriptor 00:23:08.459 [2024-04-15 16:15:38.199167] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:08.459 [2024-04-15 16:15:38.199178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:08.459 [2024-04-15 16:15:38.199190] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:08.459 [2024-04-15 16:15:38.199208] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:08.459 [2024-04-15 16:15:38.199219] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:08.459 16:15:38 -- host/timeout.sh@101 -- # sleep 3 00:23:09.395 [2024-04-15 16:15:39.199367] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.395 [2024-04-15 16:15:39.199471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.395 [2024-04-15 16:15:39.199511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.395 [2024-04-15 16:15:39.199526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecd020 with addr=10.0.0.2, port=4420 00:23:09.395 [2024-04-15 16:15:39.199540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd020 is same with the state(5) to be set 00:23:09.395 [2024-04-15 16:15:39.199566] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd020 (9): Bad file descriptor 00:23:09.395 [2024-04-15 16:15:39.199595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.395 [2024-04-15 16:15:39.199607] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.395 [2024-04-15 16:15:39.199619] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.395 [2024-04-15 16:15:39.199644] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:09.395 [2024-04-15 16:15:39.199657] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:10.329 [2024-04-15 16:15:40.199803] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.329 [2024-04-15 16:15:40.199900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.329 [2024-04-15 16:15:40.199939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.329 [2024-04-15 16:15:40.199954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecd020 with addr=10.0.0.2, port=4420 00:23:10.329 [2024-04-15 16:15:40.199968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd020 is same with the state(5) to be set 00:23:10.329 [2024-04-15 16:15:40.199996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd020 (9): Bad file descriptor 00:23:10.329 [2024-04-15 16:15:40.200022] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:10.329 [2024-04-15 16:15:40.200039] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:10.329 [2024-04-15 16:15:40.200069] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:10.329 [2024-04-15 16:15:40.200119] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:10.329 [2024-04-15 16:15:40.200133] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:11.264 [2024-04-15 16:15:41.204219] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.264 [2024-04-15 16:15:41.204347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.264 [2024-04-15 16:15:41.204398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.264 [2024-04-15 16:15:41.204418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xecd020 with addr=10.0.0.2, port=4420 00:23:11.264 [2024-04-15 16:15:41.204436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd020 is same with the state(5) to be set 00:23:11.264 [2024-04-15 16:15:41.204808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecd020 (9): Bad file descriptor 00:23:11.264 [2024-04-15 16:15:41.205145] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:11.264 [2024-04-15 16:15:41.205178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:11.264 [2024-04-15 16:15:41.205198] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:11.264 [2024-04-15 16:15:41.209775] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:11.264 [2024-04-15 16:15:41.209824] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:11.264 16:15:41 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.829 [2024-04-15 16:15:41.525751] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.829 16:15:41 -- host/timeout.sh@103 -- # wait 93444 00:23:12.399 [2024-04-15 16:15:42.241213] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:17.679 00:23:17.679 Latency(us) 00:23:17.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.679 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:17.679 Verification LBA range: start 0x0 length 0x4000 00:23:17.679 NVMe0n1 : 10.01 5687.11 22.22 3909.38 0.00 13310.87 561.74 3019898.88 00:23:17.679 =================================================================================================================== 00:23:17.679 Total : 5687.11 22.22 3909.38 0.00 13310.87 0.00 3019898.88 00:23:17.679 0 00:23:17.679 16:15:47 -- host/timeout.sh@105 -- # killprocess 93314 00:23:17.680 16:15:47 -- common/autotest_common.sh@936 -- # '[' -z 93314 ']' 00:23:17.680 16:15:47 -- common/autotest_common.sh@940 -- # kill -0 93314 00:23:17.680 16:15:47 -- common/autotest_common.sh@941 -- # uname 00:23:17.680 16:15:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:17.680 16:15:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93314 00:23:17.680 killing process with pid 93314 00:23:17.680 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.680 00:23:17.680 Latency(us) 00:23:17.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.680 =================================================================================================================== 00:23:17.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.680 16:15:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:17.680 16:15:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:17.680 16:15:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93314' 00:23:17.680 16:15:47 -- common/autotest_common.sh@955 -- # kill 93314 00:23:17.680 16:15:47 -- common/autotest_common.sh@960 -- # wait 93314 00:23:17.680 16:15:47 -- host/timeout.sh@110 -- # bdevperf_pid=93558 00:23:17.680 16:15:47 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:23:17.680 16:15:47 -- host/timeout.sh@112 -- # waitforlisten 93558 /var/tmp/bdevperf.sock 00:23:17.680 16:15:47 -- common/autotest_common.sh@817 -- # '[' -z 93558 ']' 00:23:17.680 16:15:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.680 16:15:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:17.680 16:15:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
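A quick sanity check on the verification summary above: with the 4096-byte I/O size shown in the Job line, 5687.11 IO/s * 4096 B is about 23,294,403 B/s, or roughly 22.2 MiB/s, which matches the MiB/s column; the max latency of 3,019,898.88 us (about 3.0 s) is consistent with I/O stalling across the roughly three-second window between the listener being removed and restored (host/timeout.sh@101 sleep 3), and the non-zero Fail/s column lines up with the aborted commands logged above.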
00:23:17.680 16:15:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:17.680 16:15:47 -- common/autotest_common.sh@10 -- # set +x 00:23:17.680 [2024-04-15 16:15:47.357354] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:23:17.680 [2024-04-15 16:15:47.357773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93558 ] 00:23:17.680 [2024-04-15 16:15:47.491552] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.680 [2024-04-15 16:15:47.541835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.952 16:15:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:17.952 16:15:47 -- common/autotest_common.sh@850 -- # return 0 00:23:17.952 16:15:47 -- host/timeout.sh@116 -- # dtrace_pid=93561 00:23:17.952 16:15:47 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93558 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:17.952 16:15:47 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:18.210 16:15:48 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:18.467 NVMe0n1 00:23:18.467 16:15:48 -- host/timeout.sh@124 -- # rpc_pid=93608 00:23:18.467 16:15:48 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.467 16:15:48 -- host/timeout.sh@125 -- # sleep 1 00:23:18.467 Running I/O for 10 seconds... 
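For this next test case the flow visible just above is: start bdevperf with no bdev configuration (-z) and a private RPC socket, configure the NVMe bdev over that socket, then trigger the workload that was declared on the bdevperf command line. A condensed, hedged sketch using the commands from this log (repo paths abbreviated; the bpftrace attach at host/timeout.sh@115 is omitted):

  # start bdevperf in the background; -z makes it wait for RPC instead of running the job immediately
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  # attach the controller with the reconnect knobs noted earlier; the RPC output is the new bdev name, NVMe0n1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # kick off the workload; this is what produces "Running I/O for 10 seconds..." below
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests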
00:23:19.403 16:15:49 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.664 [2024-04-15 16:15:49.520706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.521039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.521258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.521396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.521684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.521745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.521802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.521990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.522049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.522104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.522162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.522338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.522398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.522454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.522603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.522674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.522730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.522868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.522925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.523124] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.523304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.523430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.523560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.523687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.523826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.523995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.524124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.524314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.524376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.524504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.524566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.524720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.524874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.524935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.525132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.525214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.525273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.525331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.525483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.525553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.525654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.525804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.525866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.526009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.526072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.526210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.526287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.526398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.526472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.526602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.526676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.526744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.526873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.526932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.527062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.527129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.527266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.527392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.527593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.527677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.527810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.527871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.527999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.528060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.528195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.528286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.528513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.528658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.528816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.528969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.529117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.529260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.529446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.529597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.529756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.664 [2024-04-15 16:15:49.529905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.664 [2024-04-15 16:15:49.530066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.530217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.530370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.530509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.530682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.530845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.531001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.531149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.531303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.531455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.531626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.531768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.531920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.532083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.532243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.532408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.532567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.532748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.532895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.533044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.533233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.533403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.533553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.533767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 
[2024-04-15 16:15:49.533956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.534117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.534270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.534421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.534588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.534747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.534888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.535039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.535205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.535358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.535522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.535686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.535835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.535970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.536128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.536278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.536424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.536584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.536734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.536879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:70 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.665 [2024-04-15 16:15:49.537904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.665 [2024-04-15 16:15:49.537914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.537929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.537944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.537964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.537981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.537993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27128 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:19.666 [2024-04-15 16:15:49.538238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538470] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.666 [2024-04-15 16:15:49.538870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.666 [2024-04-15 16:15:49.538890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-04-15 16:15:49.538908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.538924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-04-15 16:15:49.538934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.538947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-04-15 16:15:49.538957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.538970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-04-15 16:15:49.538981] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.538994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-04-15 16:15:49.539004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.539017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-04-15 16:15:49.539027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.539045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-04-15 16:15:49.539059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.539075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-04-15 16:15:49.539093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.539113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 16:15:49 -- host/timeout.sh@128 -- # wait 93608 00:23:19.667 [2024-04-15 16:15:49.539133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.539152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:19.667 [2024-04-15 16:15:49.539168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.539187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142e850 is same with the state(5) to be set 00:23:19.667 [2024-04-15 16:15:49.539212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:19.667 [2024-04-15 16:15:49.539227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:19.667 [2024-04-15 16:15:49.539244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57528 len:8 PRP1 0x0 PRP2 0x0 00:23:19.667 [2024-04-15 16:15:49.539263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.539357] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x142e850 was disconnected and freed. reset controller. 
00:23:19.667 [2024-04-15 16:15:49.539484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.667 [2024-04-15 16:15:49.539502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.539518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.667 [2024-04-15 16:15:49.539532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.539546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.667 [2024-04-15 16:15:49.539560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.539587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.667 [2024-04-15 16:15:49.539602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.667 [2024-04-15 16:15:49.539615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1424040 is same with the state(5) to be set 00:23:19.667 [2024-04-15 16:15:49.539893] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:19.667 [2024-04-15 16:15:49.539921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1424040 (9): Bad file descriptor 00:23:19.667 [2024-04-15 16:15:49.540062] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.667 [2024-04-15 16:15:49.540127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.667 [2024-04-15 16:15:49.540166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.667 [2024-04-15 16:15:49.540183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1424040 with addr=10.0.0.2, port=4420 00:23:19.667 [2024-04-15 16:15:49.540201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1424040 is same with the state(5) to be set 00:23:19.667 [2024-04-15 16:15:49.540230] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1424040 (9): Bad file descriptor 00:23:19.667 [2024-04-15 16:15:49.540250] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:19.667 [2024-04-15 16:15:49.540263] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:19.667 [2024-04-15 16:15:49.540276] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:19.667 [2024-04-15 16:15:49.540299] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:19.667 [2024-04-15 16:15:49.540312] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:22.198 [2024-04-15 16:15:51.540489] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.198 [2024-04-15 16:15:51.540792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.198 [2024-04-15 16:15:51.540986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.198 [2024-04-15 16:15:51.541110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1424040 with addr=10.0.0.2, port=4420 00:23:22.198 [2024-04-15 16:15:51.541309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1424040 is same with the state(5) to be set 00:23:22.198 [2024-04-15 16:15:51.541474] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1424040 (9): Bad file descriptor 00:23:22.198 [2024-04-15 16:15:51.541745] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:22.198 [2024-04-15 16:15:51.541817] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:22.198 [2024-04-15 16:15:51.541952] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:22.198 [2024-04-15 16:15:51.542018] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.198 [2024-04-15 16:15:51.542058] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:24.097 [2024-04-15 16:15:53.542382] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.097 [2024-04-15 16:15:53.542715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.097 [2024-04-15 16:15:53.542902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.097 [2024-04-15 16:15:53.542957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1424040 with addr=10.0.0.2, port=4420 00:23:24.097 [2024-04-15 16:15:53.543197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1424040 is same with the state(5) to be set 00:23:24.097 [2024-04-15 16:15:53.543278] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1424040 (9): Bad file descriptor 00:23:24.097 [2024-04-15 16:15:53.543435] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:24.097 [2024-04-15 16:15:53.543494] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:24.097 [2024-04-15 16:15:53.543611] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:24.097 [2024-04-15 16:15:53.543704] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.097 [2024-04-15 16:15:53.543744] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:26.002 [2024-04-15 16:15:55.543887] bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
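The reconnect attempts above come two seconds apart (16:15:49, :51, :53, :55), which matches the --reconnect-delay-sec 2 set when the controller was attached. If one wanted to confirm that from the trace file printed just below, a throwaway check along these lines would do; the file path and the 'reconnect delay' marker are taken from the output that follows, while the awk one-liner itself is only a sketch and not part of the test:

  # print the spacing between successive 'reconnect delay' events in trace.txt
  awk '/reconnect delay bdev controller NVMe0/ { t = $1 + 0; if (p) printf "%.1f\n", t - p; p = t }' \
      /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  # expected: two gaps of roughly 2001, i.e. about 2 s if the timestamps are in milliseconds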
00:23:26.616 00:23:26.616 Latency(us) 00:23:26.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.616 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:26.616 NVMe0n1 : 8.12 2065.12 8.07 15.77 0.00 61447.44 7957.94 7030452.42 00:23:26.616 =================================================================================================================== 00:23:26.616 Total : 2065.12 8.07 15.77 0.00 61447.44 7957.94 7030452.42 00:23:26.616 0 00:23:26.616 16:15:56 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:26.616 Attaching 5 probes... 00:23:26.616 1263.717594: reset bdev controller NVMe0 00:23:26.616 1263.796332: reconnect bdev controller NVMe0 00:23:26.616 3264.200057: reconnect delay bdev controller NVMe0 00:23:26.616 3264.223045: reconnect bdev controller NVMe0 00:23:26.616 5266.081073: reconnect delay bdev controller NVMe0 00:23:26.616 5266.105206: reconnect bdev controller NVMe0 00:23:26.616 7267.681674: reconnect delay bdev controller NVMe0 00:23:26.616 7267.706759: reconnect bdev controller NVMe0 00:23:26.616 16:15:56 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:26.616 16:15:56 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:26.616 16:15:56 -- host/timeout.sh@136 -- # kill 93561 00:23:26.616 16:15:56 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:26.616 16:15:56 -- host/timeout.sh@139 -- # killprocess 93558 00:23:26.616 16:15:56 -- common/autotest_common.sh@936 -- # '[' -z 93558 ']' 00:23:26.616 16:15:56 -- common/autotest_common.sh@940 -- # kill -0 93558 00:23:26.616 16:15:56 -- common/autotest_common.sh@941 -- # uname 00:23:26.616 16:15:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:26.616 16:15:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93558 00:23:26.874 16:15:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:26.874 killing process with pid 93558 00:23:26.874 Received shutdown signal, test time was about 8.177810 seconds 00:23:26.874 00:23:26.874 Latency(us) 00:23:26.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.874 =================================================================================================================== 00:23:26.874 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.874 16:15:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:26.874 16:15:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93558' 00:23:26.874 16:15:56 -- common/autotest_common.sh@955 -- # kill 93558 00:23:26.874 16:15:56 -- common/autotest_common.sh@960 -- # wait 93558 00:23:26.874 16:15:56 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:27.155 16:15:57 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:27.155 16:15:57 -- host/timeout.sh@145 -- # nvmftestfini 00:23:27.155 16:15:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:27.156 16:15:57 -- nvmf/common.sh@117 -- # sync 00:23:27.414 16:15:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:27.414 16:15:57 -- nvmf/common.sh@120 -- # set +e 00:23:27.414 16:15:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:27.414 16:15:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:27.414 rmmod nvme_tcp 00:23:27.414 rmmod nvme_fabrics 00:23:27.414 16:15:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:23:27.414 16:15:57 -- nvmf/common.sh@124 -- # set -e 00:23:27.414 16:15:57 -- nvmf/common.sh@125 -- # return 0 00:23:27.414 16:15:57 -- nvmf/common.sh@478 -- # '[' -n 93127 ']' 00:23:27.414 16:15:57 -- nvmf/common.sh@479 -- # killprocess 93127 00:23:27.414 16:15:57 -- common/autotest_common.sh@936 -- # '[' -z 93127 ']' 00:23:27.414 16:15:57 -- common/autotest_common.sh@940 -- # kill -0 93127 00:23:27.414 16:15:57 -- common/autotest_common.sh@941 -- # uname 00:23:27.414 16:15:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:27.414 16:15:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93127 00:23:27.414 16:15:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:27.414 16:15:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:27.414 16:15:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93127' 00:23:27.414 killing process with pid 93127 00:23:27.414 16:15:57 -- common/autotest_common.sh@955 -- # kill 93127 00:23:27.414 16:15:57 -- common/autotest_common.sh@960 -- # wait 93127 00:23:27.707 16:15:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:27.707 16:15:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:27.707 16:15:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:27.707 16:15:57 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:27.707 16:15:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:27.707 16:15:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.707 16:15:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.707 16:15:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.707 16:15:57 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:27.707 ************************************ 00:23:27.707 END TEST nvmf_timeout 00:23:27.707 ************************************ 00:23:27.707 00:23:27.707 real 0m45.896s 00:23:27.707 user 2m13.612s 00:23:27.707 sys 0m6.386s 00:23:27.707 16:15:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:27.707 16:15:57 -- common/autotest_common.sh@10 -- # set +x 00:23:27.707 16:15:57 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:23:27.707 16:15:57 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:23:27.707 16:15:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:27.707 16:15:57 -- common/autotest_common.sh@10 -- # set +x 00:23:27.707 16:15:57 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:23:27.707 00:23:27.707 real 11m17.369s 00:23:27.707 user 29m36.354s 00:23:27.707 sys 3m50.970s 00:23:27.707 ************************************ 00:23:27.707 END TEST nvmf_tcp 00:23:27.707 ************************************ 00:23:27.707 16:15:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:27.707 16:15:57 -- common/autotest_common.sh@10 -- # set +x 00:23:27.707 16:15:57 -- spdk/autotest.sh@286 -- # [[ 1 -eq 0 ]] 00:23:27.707 16:15:57 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:27.707 16:15:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:27.707 16:15:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:27.707 16:15:57 -- common/autotest_common.sh@10 -- # set +x 00:23:27.969 ************************************ 00:23:27.969 START TEST nvmf_dif 00:23:27.969 ************************************ 00:23:27.969 16:15:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:27.969 * Looking for 
test storage... 00:23:27.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:27.969 16:15:57 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:27.969 16:15:57 -- nvmf/common.sh@7 -- # uname -s 00:23:27.969 16:15:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.969 16:15:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.969 16:15:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.969 16:15:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.969 16:15:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.969 16:15:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.969 16:15:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.969 16:15:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.969 16:15:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.969 16:15:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.969 16:15:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:23:27.969 16:15:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:23:27.969 16:15:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.969 16:15:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.969 16:15:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:27.969 16:15:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.969 16:15:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:27.969 16:15:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.969 16:15:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.969 16:15:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.969 16:15:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.969 16:15:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.969 16:15:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.969 16:15:57 -- paths/export.sh@5 -- # export PATH 00:23:27.969 16:15:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.969 16:15:57 -- nvmf/common.sh@47 -- # : 0 00:23:27.969 16:15:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:27.969 16:15:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:27.969 16:15:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.969 16:15:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.969 16:15:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.969 16:15:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:27.969 16:15:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:27.969 16:15:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:27.969 16:15:57 -- target/dif.sh@15 -- # NULL_META=16 00:23:27.969 16:15:57 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:27.969 16:15:57 -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:27.969 16:15:57 -- target/dif.sh@15 -- # NULL_DIF=1 00:23:27.969 16:15:57 -- target/dif.sh@135 -- # nvmftestinit 00:23:27.969 16:15:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:27.969 16:15:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.969 16:15:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:27.969 16:15:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:27.969 16:15:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:27.969 16:15:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.969 16:15:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:27.969 16:15:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.969 16:15:57 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:27.969 16:15:57 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:27.969 16:15:57 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:27.969 16:15:57 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:27.969 16:15:57 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:27.969 16:15:57 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:27.969 16:15:57 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.969 16:15:57 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.969 16:15:57 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:27.969 16:15:57 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:27.969 16:15:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:27.969 16:15:57 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:27.969 16:15:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:27.969 16:15:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.969 16:15:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:27.969 16:15:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:27.969 16:15:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:27.969 16:15:57 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:27.969 16:15:57 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:27.969 16:15:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:27.969 Cannot find device "nvmf_tgt_br" 
00:23:27.969 16:15:57 -- nvmf/common.sh@155 -- # true 00:23:27.969 16:15:57 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:27.969 Cannot find device "nvmf_tgt_br2" 00:23:27.969 16:15:57 -- nvmf/common.sh@156 -- # true 00:23:27.969 16:15:57 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:27.969 16:15:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:27.969 Cannot find device "nvmf_tgt_br" 00:23:27.969 16:15:57 -- nvmf/common.sh@158 -- # true 00:23:27.969 16:15:57 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:27.969 Cannot find device "nvmf_tgt_br2" 00:23:27.969 16:15:57 -- nvmf/common.sh@159 -- # true 00:23:27.969 16:15:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:28.322 16:15:57 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:28.322 16:15:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:28.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:28.322 16:15:57 -- nvmf/common.sh@162 -- # true 00:23:28.322 16:15:57 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:28.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:28.322 16:15:57 -- nvmf/common.sh@163 -- # true 00:23:28.322 16:15:57 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:28.322 16:15:57 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:28.322 16:15:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:28.322 16:15:58 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:28.322 16:15:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:28.322 16:15:58 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:28.322 16:15:58 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:28.322 16:15:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:28.322 16:15:58 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:28.322 16:15:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:28.322 16:15:58 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:28.322 16:15:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:28.322 16:15:58 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:28.322 16:15:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:28.322 16:15:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:28.322 16:15:58 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:28.322 16:15:58 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:28.322 16:15:58 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:28.322 16:15:58 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:28.322 16:15:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:28.322 16:15:58 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:28.322 16:15:58 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:28.322 16:15:58 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:28.322 16:15:58 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:28.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:23:28.322 00:23:28.322 --- 10.0.0.2 ping statistics --- 00:23:28.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.322 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:28.322 16:15:58 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:28.322 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:28.322 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:23:28.322 00:23:28.322 --- 10.0.0.3 ping statistics --- 00:23:28.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.322 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:23:28.322 16:15:58 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:28.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:28.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:28.322 00:23:28.322 --- 10.0.0.1 ping statistics --- 00:23:28.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.322 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:28.322 16:15:58 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.322 16:15:58 -- nvmf/common.sh@422 -- # return 0 00:23:28.322 16:15:58 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:23:28.322 16:15:58 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:28.890 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:28.890 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:28.890 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:28.890 16:15:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.890 16:15:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:28.890 16:15:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:28.890 16:15:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.890 16:15:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:28.890 16:15:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:28.890 16:15:58 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:28.890 16:15:58 -- target/dif.sh@137 -- # nvmfappstart 00:23:28.890 16:15:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:28.890 16:15:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:28.890 16:15:58 -- common/autotest_common.sh@10 -- # set +x 00:23:28.890 16:15:58 -- nvmf/common.sh@470 -- # nvmfpid=94048 00:23:28.890 16:15:58 -- nvmf/common.sh@471 -- # waitforlisten 94048 00:23:28.890 16:15:58 -- common/autotest_common.sh@817 -- # '[' -z 94048 ']' 00:23:28.890 16:15:58 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:28.890 16:15:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.890 16:15:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:28.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.890 16:15:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
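Note on the setup traced above: nvmf_veth_init wires the initiator and the in-namespace target together with veth pairs and a bridge, then verifies reachability with the three pings. The sketch below is a minimal standalone approximation assembled only from the ip/iptables/ping commands visible in this trace; interface, namespace, and address values are copied from it, and it assumes root plus iproute2/iptables on the host rather than anything SPDK-specific.

#!/usr/bin/env bash
# Minimal sketch of the topology nvmf_veth_init builds (names copied from the trace).
set -e

ip netns add nvmf_tgt_ns_spdk                     # the NVMe-oF target runs in its own namespace

# veth pairs: one initiator-side pair, two target-side pairs
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: initiator 10.0.0.1, target listeners 10.0.0.2 / 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP (port 4420) in and let traffic hairpin across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# same reachability checks the test performs
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Teardown is the reverse (delete nvmf_br, the veth pairs, and the namespace), which is where the "Cannot find device" and "Cannot open network namespace" messages earlier in the trace come from: on a fresh host there is nothing to remove yet.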
00:23:28.890 16:15:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:28.890 16:15:58 -- common/autotest_common.sh@10 -- # set +x 00:23:28.890 [2024-04-15 16:15:58.740839] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:23:28.890 [2024-04-15 16:15:58.740936] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.149 [2024-04-15 16:15:58.890487] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.149 [2024-04-15 16:15:58.946190] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.149 [2024-04-15 16:15:58.946252] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.149 [2024-04-15 16:15:58.946268] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.149 [2024-04-15 16:15:58.946281] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.149 [2024-04-15 16:15:58.946292] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.149 [2024-04-15 16:15:58.946330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.085 16:15:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:30.085 16:15:59 -- common/autotest_common.sh@850 -- # return 0 00:23:30.085 16:15:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:30.085 16:15:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:30.085 16:15:59 -- common/autotest_common.sh@10 -- # set +x 00:23:30.085 16:15:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.085 16:15:59 -- target/dif.sh@139 -- # create_transport 00:23:30.085 16:15:59 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:30.085 16:15:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.085 16:15:59 -- common/autotest_common.sh@10 -- # set +x 00:23:30.085 [2024-04-15 16:15:59.870484] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.085 16:15:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.085 16:15:59 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:30.085 16:15:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:30.085 16:15:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:30.085 16:15:59 -- common/autotest_common.sh@10 -- # set +x 00:23:30.085 ************************************ 00:23:30.085 START TEST fio_dif_1_default 00:23:30.085 ************************************ 00:23:30.085 16:15:59 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:23:30.085 16:15:59 -- target/dif.sh@86 -- # create_subsystems 0 00:23:30.085 16:15:59 -- target/dif.sh@28 -- # local sub 00:23:30.085 16:15:59 -- target/dif.sh@30 -- # for sub in "$@" 00:23:30.085 16:15:59 -- target/dif.sh@31 -- # create_subsystem 0 00:23:30.085 16:15:59 -- target/dif.sh@18 -- # local sub_id=0 00:23:30.085 16:15:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:30.085 16:15:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.085 16:15:59 -- common/autotest_common.sh@10 -- # set +x 00:23:30.085 bdev_null0 00:23:30.085 16:15:59 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.085 16:15:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:30.085 16:15:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.085 16:15:59 -- common/autotest_common.sh@10 -- # set +x 00:23:30.085 16:15:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.085 16:15:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:30.085 16:15:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.085 16:15:59 -- common/autotest_common.sh@10 -- # set +x 00:23:30.085 16:15:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.085 16:15:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:30.085 16:15:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.085 16:15:59 -- common/autotest_common.sh@10 -- # set +x 00:23:30.085 [2024-04-15 16:15:59.994785] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.085 16:15:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.085 16:15:59 -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:30.085 16:15:59 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:30.085 16:16:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:30.085 16:16:00 -- nvmf/common.sh@521 -- # config=() 00:23:30.085 16:16:00 -- nvmf/common.sh@521 -- # local subsystem config 00:23:30.085 16:16:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:30.085 16:16:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:30.085 { 00:23:30.085 "params": { 00:23:30.085 "name": "Nvme$subsystem", 00:23:30.085 "trtype": "$TEST_TRANSPORT", 00:23:30.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.085 "adrfam": "ipv4", 00:23:30.085 "trsvcid": "$NVMF_PORT", 00:23:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.085 "hdgst": ${hdgst:-false}, 00:23:30.085 "ddgst": ${ddgst:-false} 00:23:30.085 }, 00:23:30.085 "method": "bdev_nvme_attach_controller" 00:23:30.085 } 00:23:30.085 EOF 00:23:30.085 )") 00:23:30.085 16:16:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.085 16:16:00 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.085 16:16:00 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:30.085 16:16:00 -- target/dif.sh@82 -- # gen_fio_conf 00:23:30.085 16:16:00 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:30.085 16:16:00 -- target/dif.sh@54 -- # local file 00:23:30.085 16:16:00 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:30.085 16:16:00 -- target/dif.sh@56 -- # cat 00:23:30.085 16:16:00 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.085 16:16:00 -- common/autotest_common.sh@1327 -- # shift 00:23:30.085 16:16:00 -- nvmf/common.sh@543 -- # cat 00:23:30.085 16:16:00 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:30.085 16:16:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.085 16:16:00 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.085 16:16:00 -- 
common/autotest_common.sh@1331 -- # grep libasan 00:23:30.085 16:16:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:30.085 16:16:00 -- target/dif.sh@72 -- # (( file <= files )) 00:23:30.085 16:16:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:30.085 16:16:00 -- nvmf/common.sh@545 -- # jq . 00:23:30.085 16:16:00 -- nvmf/common.sh@546 -- # IFS=, 00:23:30.085 16:16:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:30.085 "params": { 00:23:30.085 "name": "Nvme0", 00:23:30.085 "trtype": "tcp", 00:23:30.085 "traddr": "10.0.0.2", 00:23:30.085 "adrfam": "ipv4", 00:23:30.085 "trsvcid": "4420", 00:23:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:30.085 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:30.085 "hdgst": false, 00:23:30.085 "ddgst": false 00:23:30.085 }, 00:23:30.085 "method": "bdev_nvme_attach_controller" 00:23:30.085 }' 00:23:30.085 16:16:00 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:30.085 16:16:00 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:30.085 16:16:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.085 16:16:00 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.085 16:16:00 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:30.085 16:16:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:30.343 16:16:00 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:30.343 16:16:00 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:30.343 16:16:00 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:30.343 16:16:00 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.343 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:30.343 fio-3.35 00:23:30.343 Starting 1 thread 00:23:30.601 [2024-04-15 16:16:00.559077] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
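For context on the fio invocation traced above: the test drives I/O through SPDK's fio bdev plugin. gen_nvmf_target_json writes the bdev_nvme_attach_controller config to one file descriptor, gen_fio_conf writes the job file to another, and fio is launched with the plugin LD_PRELOADed. Below is a rough standalone reconstruction using ordinary files; the "subsystems" wrapper is the usual SPDK JSON-config layout, the job parameters (4k randread, iodepth 4, roughly 10 s) are inferred from the fio banner and run time rather than echoed in the trace, and the bdev name Nvme0n1 assumes SPDK's usual <controller>n<nsid> naming for attached namespaces.

# Sketch only: rebuild the two descriptors passed to fio above as real files.
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Job parameters inferred from the run banner ("rw=randread, bs=4096B, iodepth=4")
# and the ~10001 msec run; the real job file is generated on the fly by gen_fio_conf.
cat > dif.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF

# fio must load the SPDK bdev engine before parsing the job file
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./dif.fio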
00:23:30.601 [2024-04-15 16:16:00.559795] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:42.802 00:23:42.802 filename0: (groupid=0, jobs=1): err= 0: pid=94120: Mon Apr 15 16:16:10 2024 00:23:42.802 read: IOPS=10.1k, BW=39.5MiB/s (41.4MB/s)(395MiB/10001msec) 00:23:42.802 slat (usec): min=5, max=778, avg= 7.12, stdev= 2.92 00:23:42.802 clat (usec): min=301, max=2196, avg=376.22, stdev=30.63 00:23:42.802 lat (usec): min=307, max=2230, avg=383.34, stdev=31.13 00:23:42.802 clat percentiles (usec): 00:23:42.802 | 1.00th=[ 318], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 355], 00:23:42.802 | 30.00th=[ 363], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 383], 00:23:42.802 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 408], 95.00th=[ 416], 00:23:42.802 | 99.00th=[ 437], 99.50th=[ 457], 99.90th=[ 537], 99.95th=[ 570], 00:23:42.802 | 99.99th=[ 1336] 00:23:42.802 bw ( KiB/s): min=37920, max=42048, per=99.89%, avg=40382.32, stdev=1097.81, samples=19 00:23:42.802 iops : min= 9480, max=10512, avg=10095.58, stdev=274.45, samples=19 00:23:42.802 lat (usec) : 500=99.77%, 750=0.21%, 1000=0.01% 00:23:42.802 lat (msec) : 2=0.01%, 4=0.01% 00:23:42.802 cpu : usr=83.10%, sys=15.44%, ctx=128, majf=0, minf=0 00:23:42.802 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:42.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.802 issued rwts: total=101072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.802 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:42.802 00:23:42.802 Run status group 0 (all jobs): 00:23:42.802 READ: bw=39.5MiB/s (41.4MB/s), 39.5MiB/s-39.5MiB/s (41.4MB/s-41.4MB/s), io=395MiB (414MB), run=10001-10001msec 00:23:42.802 16:16:10 -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:42.802 16:16:10 -- target/dif.sh@43 -- # local sub 00:23:42.802 16:16:10 -- target/dif.sh@45 -- # for sub in "$@" 00:23:42.802 16:16:10 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:42.802 16:16:10 -- target/dif.sh@36 -- # local sub_id=0 00:23:42.802 16:16:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:42.802 16:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.802 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:23:42.802 16:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.802 16:16:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:42.802 16:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.802 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:23:42.802 16:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.802 00:23:42.802 real 0m10.934s 00:23:42.802 user 0m8.875s 00:23:42.802 sys 0m1.832s 00:23:42.802 ************************************ 00:23:42.802 END TEST fio_dif_1_default 00:23:42.802 ************************************ 00:23:42.802 16:16:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:42.802 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:23:42.802 16:16:10 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:42.802 16:16:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:42.802 16:16:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:42.802 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:23:42.802 ************************************ 00:23:42.802 START TEST 
fio_dif_1_multi_subsystems 00:23:42.802 ************************************ 00:23:42.802 16:16:11 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:23:42.803 16:16:11 -- target/dif.sh@92 -- # local files=1 00:23:42.803 16:16:11 -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:42.803 16:16:11 -- target/dif.sh@28 -- # local sub 00:23:42.803 16:16:11 -- target/dif.sh@30 -- # for sub in "$@" 00:23:42.803 16:16:11 -- target/dif.sh@31 -- # create_subsystem 0 00:23:42.803 16:16:11 -- target/dif.sh@18 -- # local sub_id=0 00:23:42.803 16:16:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:42.803 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.803 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:23:42.803 bdev_null0 00:23:42.803 16:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.803 16:16:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:42.803 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.803 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:23:42.803 16:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.803 16:16:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:42.803 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.803 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:23:42.803 16:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.803 16:16:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:42.803 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.803 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:23:42.803 [2024-04-15 16:16:11.039814] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.803 16:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.803 16:16:11 -- target/dif.sh@30 -- # for sub in "$@" 00:23:42.803 16:16:11 -- target/dif.sh@31 -- # create_subsystem 1 00:23:42.803 16:16:11 -- target/dif.sh@18 -- # local sub_id=1 00:23:42.803 16:16:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:42.803 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.803 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:23:42.803 bdev_null1 00:23:42.803 16:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.803 16:16:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:42.803 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.803 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:23:42.803 16:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.803 16:16:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:42.803 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.803 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:23:42.803 16:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.803 16:16:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:42.803 16:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.803 16:16:11 -- 
common/autotest_common.sh@10 -- # set +x 00:23:42.803 16:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.803 16:16:11 -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:42.803 16:16:11 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:42.803 16:16:11 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:42.803 16:16:11 -- nvmf/common.sh@521 -- # config=() 00:23:42.803 16:16:11 -- nvmf/common.sh@521 -- # local subsystem config 00:23:42.803 16:16:11 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:42.803 16:16:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:42.803 16:16:11 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:42.803 16:16:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:42.803 { 00:23:42.803 "params": { 00:23:42.803 "name": "Nvme$subsystem", 00:23:42.803 "trtype": "$TEST_TRANSPORT", 00:23:42.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.803 "adrfam": "ipv4", 00:23:42.803 "trsvcid": "$NVMF_PORT", 00:23:42.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.803 "hdgst": ${hdgst:-false}, 00:23:42.803 "ddgst": ${ddgst:-false} 00:23:42.803 }, 00:23:42.803 "method": "bdev_nvme_attach_controller" 00:23:42.803 } 00:23:42.803 EOF 00:23:42.803 )") 00:23:42.803 16:16:11 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:42.803 16:16:11 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:42.803 16:16:11 -- target/dif.sh@82 -- # gen_fio_conf 00:23:42.803 16:16:11 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:42.803 16:16:11 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:42.803 16:16:11 -- common/autotest_common.sh@1327 -- # shift 00:23:42.803 16:16:11 -- target/dif.sh@54 -- # local file 00:23:42.803 16:16:11 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:42.803 16:16:11 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:42.803 16:16:11 -- target/dif.sh@56 -- # cat 00:23:42.803 16:16:11 -- nvmf/common.sh@543 -- # cat 00:23:42.803 16:16:11 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:42.803 16:16:11 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:42.803 16:16:11 -- target/dif.sh@72 -- # (( file <= files )) 00:23:42.803 16:16:11 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:42.803 16:16:11 -- target/dif.sh@73 -- # cat 00:23:42.803 16:16:11 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:42.803 16:16:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:42.803 16:16:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:42.803 { 00:23:42.803 "params": { 00:23:42.803 "name": "Nvme$subsystem", 00:23:42.803 "trtype": "$TEST_TRANSPORT", 00:23:42.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.803 "adrfam": "ipv4", 00:23:42.803 "trsvcid": "$NVMF_PORT", 00:23:42.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.803 "hdgst": ${hdgst:-false}, 00:23:42.803 "ddgst": ${ddgst:-false} 00:23:42.803 }, 00:23:42.803 "method": "bdev_nvme_attach_controller" 00:23:42.803 } 00:23:42.803 EOF 00:23:42.803 )") 00:23:42.803 16:16:11 -- nvmf/common.sh@543 -- # cat 00:23:42.803 16:16:11 -- target/dif.sh@72 
-- # (( file++ )) 00:23:42.803 16:16:11 -- target/dif.sh@72 -- # (( file <= files )) 00:23:42.803 16:16:11 -- nvmf/common.sh@545 -- # jq . 00:23:42.803 16:16:11 -- nvmf/common.sh@546 -- # IFS=, 00:23:42.803 16:16:11 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:42.803 "params": { 00:23:42.803 "name": "Nvme0", 00:23:42.803 "trtype": "tcp", 00:23:42.803 "traddr": "10.0.0.2", 00:23:42.803 "adrfam": "ipv4", 00:23:42.803 "trsvcid": "4420", 00:23:42.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:42.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:42.803 "hdgst": false, 00:23:42.803 "ddgst": false 00:23:42.803 }, 00:23:42.803 "method": "bdev_nvme_attach_controller" 00:23:42.803 },{ 00:23:42.803 "params": { 00:23:42.803 "name": "Nvme1", 00:23:42.803 "trtype": "tcp", 00:23:42.803 "traddr": "10.0.0.2", 00:23:42.803 "adrfam": "ipv4", 00:23:42.803 "trsvcid": "4420", 00:23:42.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.803 "hdgst": false, 00:23:42.803 "ddgst": false 00:23:42.803 }, 00:23:42.803 "method": "bdev_nvme_attach_controller" 00:23:42.803 }' 00:23:42.803 16:16:11 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:42.803 16:16:11 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:42.803 16:16:11 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:42.803 16:16:11 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:42.803 16:16:11 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:42.803 16:16:11 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:42.803 16:16:11 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:42.803 16:16:11 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:42.803 16:16:11 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:42.803 16:16:11 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:42.803 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:42.803 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:42.803 fio-3.35 00:23:42.803 Starting 2 threads 00:23:42.803 [2024-04-15 16:16:11.712897] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
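The rpc_cmd calls traced above build the layout for fio_dif_1_multi_subsystems: two DIF type 1 null bdevs, each exported through its own subsystem with a TCP listener on 10.0.0.2:4420. A rough equivalent using scripts/rpc.py directly is sketched below, under the assumption that rpc_cmd is a thin wrapper over that script talking to the target's default /var/tmp/spdk.sock; flags and values are copied from the trace.

# Rough standalone equivalent of the rpc_cmd sequence above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# TCP transport with DIF insert/strip enabled (done once, in dif.sh@139 above)
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip

for i in 0 1; do
    # 64 MB null bdev, 512-byte blocks with 16-byte metadata, DIF type 1
    $RPC bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        --serial-number 53313233-$i --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done

With --dif-insert-or-strip set on the transport, the target handles the 16-byte protection information on the TCP data path, which is the behavior these dif.sh tests exercise.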
00:23:42.803 [2024-04-15 16:16:11.713876] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:52.786 00:23:52.786 filename0: (groupid=0, jobs=1): err= 0: pid=94283: Mon Apr 15 16:16:21 2024 00:23:52.786 read: IOPS=5060, BW=19.8MiB/s (20.7MB/s)(198MiB/10001msec) 00:23:52.786 slat (usec): min=6, max=156, avg=14.37, stdev= 3.84 00:23:52.786 clat (usec): min=404, max=7345, avg=751.16, stdev=71.29 00:23:52.786 lat (usec): min=413, max=7367, avg=765.53, stdev=71.57 00:23:52.786 clat percentiles (usec): 00:23:52.786 | 1.00th=[ 676], 5.00th=[ 701], 10.00th=[ 709], 20.00th=[ 725], 00:23:52.786 | 30.00th=[ 734], 40.00th=[ 742], 50.00th=[ 750], 60.00th=[ 758], 00:23:52.786 | 70.00th=[ 766], 80.00th=[ 775], 90.00th=[ 791], 95.00th=[ 807], 00:23:52.786 | 99.00th=[ 848], 99.50th=[ 881], 99.90th=[ 979], 99.95th=[ 1029], 00:23:52.786 | 99.99th=[ 2442] 00:23:52.786 bw ( KiB/s): min=19712, max=20576, per=50.00%, avg=20259.37, stdev=213.84, samples=19 00:23:52.786 iops : min= 4928, max= 5144, avg=5064.84, stdev=53.46, samples=19 00:23:52.786 lat (usec) : 500=0.03%, 750=51.88%, 1000=48.01% 00:23:52.786 lat (msec) : 2=0.06%, 4=0.01%, 10=0.01% 00:23:52.786 cpu : usr=89.44%, sys=9.29%, ctx=118, majf=0, minf=0 00:23:52.786 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:52.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.786 issued rwts: total=50608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.786 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:52.786 filename1: (groupid=0, jobs=1): err= 0: pid=94284: Mon Apr 15 16:16:21 2024 00:23:52.786 read: IOPS=5068, BW=19.8MiB/s (20.8MB/s)(198MiB/10001msec) 00:23:52.786 slat (usec): min=3, max=104, avg=13.77, stdev= 3.63 00:23:52.786 clat (usec): min=387, max=1588, avg=752.32, stdev=40.53 00:23:52.786 lat (usec): min=396, max=1614, avg=766.09, stdev=41.08 00:23:52.786 clat percentiles (usec): 00:23:52.786 | 1.00th=[ 660], 5.00th=[ 693], 10.00th=[ 709], 20.00th=[ 725], 00:23:52.786 | 30.00th=[ 734], 40.00th=[ 742], 50.00th=[ 750], 60.00th=[ 758], 00:23:52.786 | 70.00th=[ 775], 80.00th=[ 783], 90.00th=[ 799], 95.00th=[ 816], 00:23:52.786 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 938], 99.95th=[ 1004], 00:23:52.786 | 99.99th=[ 1057] 00:23:52.786 bw ( KiB/s): min=20024, max=20608, per=50.09%, avg=20294.32, stdev=168.88, samples=19 00:23:52.786 iops : min= 5006, max= 5152, avg=5073.68, stdev=42.26, samples=19 00:23:52.786 lat (usec) : 500=0.17%, 750=47.50%, 1000=52.28% 00:23:52.786 lat (msec) : 2=0.05% 00:23:52.786 cpu : usr=89.01%, sys=9.83%, ctx=181, majf=0, minf=9 00:23:52.786 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:52.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.786 issued rwts: total=50688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.786 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:52.786 00:23:52.786 Run status group 0 (all jobs): 00:23:52.786 READ: bw=39.6MiB/s (41.5MB/s), 19.8MiB/s-19.8MiB/s (20.7MB/s-20.8MB/s), io=396MiB (415MB), run=10001-10001msec 00:23:52.786 16:16:22 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:52.786 16:16:22 -- target/dif.sh@43 -- # local sub 00:23:52.786 16:16:22 -- target/dif.sh@45 -- # for sub in "$@" 00:23:52.786 16:16:22 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:23:52.786 16:16:22 -- target/dif.sh@36 -- # local sub_id=0 00:23:52.786 16:16:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:52.786 16:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.786 16:16:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.786 16:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.786 16:16:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:52.786 16:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.786 16:16:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.786 16:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.786 16:16:22 -- target/dif.sh@45 -- # for sub in "$@" 00:23:52.786 16:16:22 -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:52.786 16:16:22 -- target/dif.sh@36 -- # local sub_id=1 00:23:52.786 16:16:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.786 16:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.786 16:16:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.786 16:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.786 16:16:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:52.786 16:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.786 16:16:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.786 16:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.786 00:23:52.786 real 0m11.047s 00:23:52.786 user 0m18.515s 00:23:52.786 sys 0m2.200s 00:23:52.786 16:16:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:52.786 16:16:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.786 ************************************ 00:23:52.786 END TEST fio_dif_1_multi_subsystems 00:23:52.786 ************************************ 00:23:52.786 16:16:22 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:52.786 16:16:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:52.786 16:16:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:52.786 16:16:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.786 ************************************ 00:23:52.786 START TEST fio_dif_rand_params 00:23:52.786 ************************************ 00:23:52.786 16:16:22 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:23:52.786 16:16:22 -- target/dif.sh@100 -- # local NULL_DIF 00:23:52.786 16:16:22 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:52.786 16:16:22 -- target/dif.sh@103 -- # NULL_DIF=3 00:23:52.786 16:16:22 -- target/dif.sh@103 -- # bs=128k 00:23:52.786 16:16:22 -- target/dif.sh@103 -- # numjobs=3 00:23:52.786 16:16:22 -- target/dif.sh@103 -- # iodepth=3 00:23:52.786 16:16:22 -- target/dif.sh@103 -- # runtime=5 00:23:52.786 16:16:22 -- target/dif.sh@105 -- # create_subsystems 0 00:23:52.786 16:16:22 -- target/dif.sh@28 -- # local sub 00:23:52.786 16:16:22 -- target/dif.sh@30 -- # for sub in "$@" 00:23:52.786 16:16:22 -- target/dif.sh@31 -- # create_subsystem 0 00:23:52.786 16:16:22 -- target/dif.sh@18 -- # local sub_id=0 00:23:52.786 16:16:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:52.786 16:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.786 16:16:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.786 bdev_null0 00:23:52.786 16:16:22 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:23:52.786 16:16:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:52.786 16:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.786 16:16:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.786 16:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.786 16:16:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:52.786 16:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.786 16:16:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.786 16:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.786 16:16:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:52.786 16:16:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.786 16:16:22 -- common/autotest_common.sh@10 -- # set +x 00:23:52.786 [2024-04-15 16:16:22.220518] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.786 16:16:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.786 16:16:22 -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:52.786 16:16:22 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:52.786 16:16:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:52.786 16:16:22 -- nvmf/common.sh@521 -- # config=() 00:23:52.786 16:16:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.786 16:16:22 -- nvmf/common.sh@521 -- # local subsystem config 00:23:52.786 16:16:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:52.786 16:16:22 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.786 16:16:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:52.786 { 00:23:52.786 "params": { 00:23:52.786 "name": "Nvme$subsystem", 00:23:52.786 "trtype": "$TEST_TRANSPORT", 00:23:52.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.786 "adrfam": "ipv4", 00:23:52.786 "trsvcid": "$NVMF_PORT", 00:23:52.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.786 "hdgst": ${hdgst:-false}, 00:23:52.786 "ddgst": ${ddgst:-false} 00:23:52.786 }, 00:23:52.786 "method": "bdev_nvme_attach_controller" 00:23:52.786 } 00:23:52.786 EOF 00:23:52.786 )") 00:23:52.786 16:16:22 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:52.786 16:16:22 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:52.786 16:16:22 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:52.786 16:16:22 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.786 16:16:22 -- common/autotest_common.sh@1327 -- # shift 00:23:52.786 16:16:22 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:52.786 16:16:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:52.786 16:16:22 -- target/dif.sh@82 -- # gen_fio_conf 00:23:52.786 16:16:22 -- target/dif.sh@54 -- # local file 00:23:52.786 16:16:22 -- nvmf/common.sh@543 -- # cat 00:23:52.786 16:16:22 -- target/dif.sh@56 -- # cat 00:23:52.786 16:16:22 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:52.786 16:16:22 -- common/autotest_common.sh@1331 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.786 16:16:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:52.786 16:16:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:52.786 16:16:22 -- target/dif.sh@72 -- # (( file <= files )) 00:23:52.786 16:16:22 -- nvmf/common.sh@545 -- # jq . 00:23:52.786 16:16:22 -- nvmf/common.sh@546 -- # IFS=, 00:23:52.786 16:16:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:52.786 "params": { 00:23:52.786 "name": "Nvme0", 00:23:52.786 "trtype": "tcp", 00:23:52.786 "traddr": "10.0.0.2", 00:23:52.786 "adrfam": "ipv4", 00:23:52.786 "trsvcid": "4420", 00:23:52.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:52.786 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:52.786 "hdgst": false, 00:23:52.786 "ddgst": false 00:23:52.786 }, 00:23:52.786 "method": "bdev_nvme_attach_controller" 00:23:52.786 }' 00:23:52.786 16:16:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:52.786 16:16:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:52.786 16:16:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:52.786 16:16:22 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.786 16:16:22 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:52.786 16:16:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:52.786 16:16:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:52.786 16:16:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:52.786 16:16:22 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:52.786 16:16:22 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.786 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:52.786 ... 00:23:52.786 fio-3.35 00:23:52.786 Starting 3 threads 00:23:53.044 [2024-04-15 16:16:22.792933] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
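fio_dif_rand_params, whose three-thread run starts above, reuses the same plumbing but switches the null bdev to DIF type 3 and changes the fio knobs (128 KiB blocks, 3 jobs, queue depth 3, 5-second runs), per the dif.sh@103 settings traced earlier. Only the pieces that differ are sketched here; the job-file keys are inferred, since the test generates the job on the fly, and Nvme0n1 is again the assumed bdev name.

# DIF type 3 null bdev this round (values copied from the trace)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

cat > rand_params.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF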
00:23:53.045 [2024-04-15 16:16:22.792998] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:58.321 00:23:58.321 filename0: (groupid=0, jobs=1): err= 0: pid=94450: Mon Apr 15 16:16:27 2024 00:23:58.321 read: IOPS=276, BW=34.5MiB/s (36.2MB/s)(173MiB/5005msec) 00:23:58.321 slat (nsec): min=6649, max=50497, avg=19003.25, stdev=5955.75 00:23:58.321 clat (usec): min=10146, max=12777, avg=10811.56, stdev=282.32 00:23:58.321 lat (usec): min=10161, max=12803, avg=10830.57, stdev=283.17 00:23:58.321 clat percentiles (usec): 00:23:58.321 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10421], 20.00th=[10552], 00:23:58.321 | 30.00th=[10683], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:23:58.321 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11076], 95.00th=[11207], 00:23:58.321 | 99.00th=[11469], 99.50th=[11863], 99.90th=[12780], 99.95th=[12780], 00:23:58.321 | 99.99th=[12780] 00:23:58.321 bw ( KiB/s): min=34560, max=36864, per=33.39%, avg=35413.33, stdev=712.67, samples=9 00:23:58.321 iops : min= 270, max= 288, avg=276.67, stdev= 5.57, samples=9 00:23:58.321 lat (msec) : 20=100.00% 00:23:58.321 cpu : usr=88.81%, sys=10.31%, ctx=41, majf=0, minf=0 00:23:58.321 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.321 issued rwts: total=1383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.321 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:58.321 filename0: (groupid=0, jobs=1): err= 0: pid=94451: Mon Apr 15 16:16:27 2024 00:23:58.321 read: IOPS=276, BW=34.5MiB/s (36.2MB/s)(173MiB/5004msec) 00:23:58.321 slat (nsec): min=6944, max=48204, avg=18651.42, stdev=6490.51 00:23:58.321 clat (usec): min=9120, max=13166, avg=10809.30, stdev=299.78 00:23:58.321 lat (usec): min=9128, max=13187, avg=10827.95, stdev=300.46 00:23:58.321 clat percentiles (usec): 00:23:58.321 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10421], 20.00th=[10552], 00:23:58.321 | 30.00th=[10683], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:23:58.321 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11076], 95.00th=[11207], 00:23:58.321 | 99.00th=[11600], 99.50th=[11731], 99.90th=[13173], 99.95th=[13173], 00:23:58.321 | 99.99th=[13173] 00:23:58.321 bw ( KiB/s): min=34560, max=36096, per=33.39%, avg=35413.33, stdev=461.51, samples=9 00:23:58.321 iops : min= 270, max= 282, avg=276.67, stdev= 3.61, samples=9 00:23:58.321 lat (msec) : 10=0.22%, 20=99.78% 00:23:58.321 cpu : usr=88.63%, sys=10.41%, ctx=104, majf=0, minf=0 00:23:58.321 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.321 issued rwts: total=1383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.321 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:58.321 filename0: (groupid=0, jobs=1): err= 0: pid=94452: Mon Apr 15 16:16:27 2024 00:23:58.321 read: IOPS=276, BW=34.5MiB/s (36.2MB/s)(173MiB/5008msec) 00:23:58.321 slat (nsec): min=3836, max=53139, avg=18416.98, stdev=6309.16 00:23:58.321 clat (usec): min=10138, max=16055, avg=10819.44, stdev=362.73 00:23:58.321 lat (usec): min=10151, max=16075, avg=10837.85, stdev=363.08 00:23:58.321 clat percentiles (usec): 00:23:58.321 | 1.00th=[10290], 5.00th=[10421], 
10.00th=[10421], 20.00th=[10552], 00:23:58.321 | 30.00th=[10683], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:23:58.321 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11076], 95.00th=[11207], 00:23:58.321 | 99.00th=[11469], 99.50th=[11863], 99.90th=[16057], 99.95th=[16057], 00:23:58.321 | 99.99th=[16057] 00:23:58.321 bw ( KiB/s): min=34560, max=36096, per=33.31%, avg=35328.00, stdev=627.07, samples=10 00:23:58.321 iops : min= 270, max= 282, avg=276.00, stdev= 4.90, samples=10 00:23:58.321 lat (msec) : 20=100.00% 00:23:58.321 cpu : usr=88.76%, sys=10.33%, ctx=68, majf=0, minf=0 00:23:58.321 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.321 issued rwts: total=1383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.321 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:58.321 00:23:58.321 Run status group 0 (all jobs): 00:23:58.321 READ: bw=104MiB/s (109MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=519MiB (544MB), run=5004-5008msec 00:23:58.321 16:16:28 -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:58.321 16:16:28 -- target/dif.sh@43 -- # local sub 00:23:58.321 16:16:28 -- target/dif.sh@45 -- # for sub in "$@" 00:23:58.321 16:16:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:58.321 16:16:28 -- target/dif.sh@36 -- # local sub_id=0 00:23:58.321 16:16:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:58.321 16:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.321 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 16:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.321 16:16:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:58.321 16:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.321 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 16:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.321 16:16:28 -- target/dif.sh@109 -- # NULL_DIF=2 00:23:58.321 16:16:28 -- target/dif.sh@109 -- # bs=4k 00:23:58.321 16:16:28 -- target/dif.sh@109 -- # numjobs=8 00:23:58.321 16:16:28 -- target/dif.sh@109 -- # iodepth=16 00:23:58.321 16:16:28 -- target/dif.sh@109 -- # runtime= 00:23:58.321 16:16:28 -- target/dif.sh@109 -- # files=2 00:23:58.321 16:16:28 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:58.321 16:16:28 -- target/dif.sh@28 -- # local sub 00:23:58.321 16:16:28 -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.321 16:16:28 -- target/dif.sh@31 -- # create_subsystem 0 00:23:58.321 16:16:28 -- target/dif.sh@18 -- # local sub_id=0 00:23:58.321 16:16:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:58.321 16:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.321 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 bdev_null0 00:23:58.321 16:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.321 16:16:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:58.321 16:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.321 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 16:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.321 16:16:28 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:58.321 16:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.321 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 16:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.321 16:16:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:58.321 16:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.321 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 [2024-04-15 16:16:28.160613] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.321 16:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.321 16:16:28 -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.321 16:16:28 -- target/dif.sh@31 -- # create_subsystem 1 00:23:58.321 16:16:28 -- target/dif.sh@18 -- # local sub_id=1 00:23:58.321 16:16:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:58.321 16:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.321 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 bdev_null1 00:23:58.321 16:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.321 16:16:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:58.321 16:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.321 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 16:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.321 16:16:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:58.321 16:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.321 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 16:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.321 16:16:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.321 16:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.321 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 16:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.321 16:16:28 -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.321 16:16:28 -- target/dif.sh@31 -- # create_subsystem 2 00:23:58.321 16:16:28 -- target/dif.sh@18 -- # local sub_id=2 00:23:58.321 16:16:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:58.321 16:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.321 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 bdev_null2 00:23:58.321 16:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.321 16:16:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:58.321 16:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.321 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 16:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.321 16:16:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:58.321 16:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.321 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 16:16:28 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.321 16:16:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:58.321 16:16:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.321 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.322 16:16:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.322 16:16:28 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:58.322 16:16:28 -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:58.322 16:16:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:58.322 16:16:28 -- nvmf/common.sh@521 -- # config=() 00:23:58.322 16:16:28 -- nvmf/common.sh@521 -- # local subsystem config 00:23:58.322 16:16:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:58.322 16:16:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:58.322 { 00:23:58.322 "params": { 00:23:58.322 "name": "Nvme$subsystem", 00:23:58.322 "trtype": "$TEST_TRANSPORT", 00:23:58.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.322 "adrfam": "ipv4", 00:23:58.322 "trsvcid": "$NVMF_PORT", 00:23:58.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.322 "hdgst": ${hdgst:-false}, 00:23:58.322 "ddgst": ${ddgst:-false} 00:23:58.322 }, 00:23:58.322 "method": "bdev_nvme_attach_controller" 00:23:58.322 } 00:23:58.322 EOF 00:23:58.322 )") 00:23:58.322 16:16:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.322 16:16:28 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.322 16:16:28 -- target/dif.sh@82 -- # gen_fio_conf 00:23:58.322 16:16:28 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:58.322 16:16:28 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:58.322 16:16:28 -- target/dif.sh@54 -- # local file 00:23:58.322 16:16:28 -- nvmf/common.sh@543 -- # cat 00:23:58.322 16:16:28 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:58.322 16:16:28 -- target/dif.sh@56 -- # cat 00:23:58.322 16:16:28 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.322 16:16:28 -- common/autotest_common.sh@1327 -- # shift 00:23:58.322 16:16:28 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:58.322 16:16:28 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.322 16:16:28 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.322 16:16:28 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:58.322 16:16:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:58.322 16:16:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:58.322 { 00:23:58.322 "params": { 00:23:58.322 "name": "Nvme$subsystem", 00:23:58.322 "trtype": "$TEST_TRANSPORT", 00:23:58.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.322 "adrfam": "ipv4", 00:23:58.322 "trsvcid": "$NVMF_PORT", 00:23:58.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.322 "hdgst": ${hdgst:-false}, 00:23:58.322 "ddgst": ${ddgst:-false} 00:23:58.322 }, 00:23:58.322 "method": "bdev_nvme_attach_controller" 00:23:58.322 } 00:23:58.322 EOF 00:23:58.322 )") 00:23:58.322 16:16:28 -- common/autotest_common.sh@1331 -- # 
awk '{print $3}' 00:23:58.322 16:16:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:58.322 16:16:28 -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.322 16:16:28 -- target/dif.sh@73 -- # cat 00:23:58.322 16:16:28 -- nvmf/common.sh@543 -- # cat 00:23:58.322 16:16:28 -- target/dif.sh@72 -- # (( file++ )) 00:23:58.322 16:16:28 -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.322 16:16:28 -- target/dif.sh@73 -- # cat 00:23:58.322 16:16:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:58.322 16:16:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:58.322 { 00:23:58.322 "params": { 00:23:58.322 "name": "Nvme$subsystem", 00:23:58.322 "trtype": "$TEST_TRANSPORT", 00:23:58.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.322 "adrfam": "ipv4", 00:23:58.322 "trsvcid": "$NVMF_PORT", 00:23:58.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.322 "hdgst": ${hdgst:-false}, 00:23:58.322 "ddgst": ${ddgst:-false} 00:23:58.322 }, 00:23:58.322 "method": "bdev_nvme_attach_controller" 00:23:58.322 } 00:23:58.322 EOF 00:23:58.322 )") 00:23:58.322 16:16:28 -- target/dif.sh@72 -- # (( file++ )) 00:23:58.322 16:16:28 -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.322 16:16:28 -- nvmf/common.sh@543 -- # cat 00:23:58.322 16:16:28 -- nvmf/common.sh@545 -- # jq . 00:23:58.322 16:16:28 -- nvmf/common.sh@546 -- # IFS=, 00:23:58.322 16:16:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:58.322 "params": { 00:23:58.322 "name": "Nvme0", 00:23:58.322 "trtype": "tcp", 00:23:58.322 "traddr": "10.0.0.2", 00:23:58.322 "adrfam": "ipv4", 00:23:58.322 "trsvcid": "4420", 00:23:58.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.322 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:58.322 "hdgst": false, 00:23:58.322 "ddgst": false 00:23:58.322 }, 00:23:58.322 "method": "bdev_nvme_attach_controller" 00:23:58.322 },{ 00:23:58.322 "params": { 00:23:58.322 "name": "Nvme1", 00:23:58.322 "trtype": "tcp", 00:23:58.322 "traddr": "10.0.0.2", 00:23:58.322 "adrfam": "ipv4", 00:23:58.322 "trsvcid": "4420", 00:23:58.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.322 "hdgst": false, 00:23:58.322 "ddgst": false 00:23:58.322 }, 00:23:58.322 "method": "bdev_nvme_attach_controller" 00:23:58.322 },{ 00:23:58.322 "params": { 00:23:58.322 "name": "Nvme2", 00:23:58.322 "trtype": "tcp", 00:23:58.322 "traddr": "10.0.0.2", 00:23:58.322 "adrfam": "ipv4", 00:23:58.322 "trsvcid": "4420", 00:23:58.322 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:58.322 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:58.322 "hdgst": false, 00:23:58.322 "ddgst": false 00:23:58.322 }, 00:23:58.322 "method": "bdev_nvme_attach_controller" 00:23:58.322 }' 00:23:58.322 16:16:28 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:58.322 16:16:28 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:58.322 16:16:28 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.322 16:16:28 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:58.322 16:16:28 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.322 16:16:28 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:58.580 16:16:28 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:58.580 16:16:28 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:58.580 16:16:28 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:58.580 16:16:28 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.581 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:58.581 ... 00:23:58.581 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:58.581 ... 00:23:58.581 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:58.581 ... 00:23:58.581 fio-3.35 00:23:58.581 Starting 24 threads 00:23:59.147 [2024-04-15 16:16:28.912628] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:23:59.147 [2024-04-15 16:16:28.912687] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:11.346 00:24:11.346 filename0: (groupid=0, jobs=1): err= 0: pid=94547: Mon Apr 15 16:16:39 2024 00:24:11.346 read: IOPS=197, BW=791KiB/s (810kB/s)(7916KiB/10009msec) 00:24:11.346 slat (usec): min=3, max=18028, avg=22.92, stdev=404.99 00:24:11.346 clat (msec): min=29, max=162, avg=80.78, stdev=23.30 00:24:11.346 lat (msec): min=29, max=162, avg=80.81, stdev=23.31 00:24:11.346 clat percentiles (msec): 00:24:11.346 | 1.00th=[ 34], 5.00th=[ 54], 10.00th=[ 54], 20.00th=[ 56], 00:24:11.346 | 30.00th=[ 62], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 82], 00:24:11.346 | 70.00th=[ 93], 80.00th=[ 108], 90.00th=[ 109], 95.00th=[ 117], 00:24:11.346 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 163], 99.95th=[ 163], 00:24:11.346 | 99.99th=[ 163] 00:24:11.346 bw ( KiB/s): min= 528, max= 1064, per=4.00%, avg=787.65, stdev=164.40, samples=20 00:24:11.346 iops : min= 132, max= 266, avg=196.90, stdev=41.09, samples=20 00:24:11.346 lat (msec) : 50=3.13%, 100=70.84%, 250=26.02% 00:24:11.346 cpu : usr=30.81%, sys=2.35%, ctx=379, majf=0, minf=9 00:24:11.346 IO depths : 1=0.2%, 2=1.0%, 4=3.4%, 8=78.8%, 16=16.6%, 32=0.0%, >=64=0.0% 00:24:11.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.346 complete : 0=0.0%, 4=89.0%, 8=10.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.346 issued rwts: total=1979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.346 filename0: (groupid=0, jobs=1): err= 0: pid=94548: Mon Apr 15 16:16:39 2024 00:24:11.346 read: IOPS=199, BW=799KiB/s (818kB/s)(8028KiB/10045msec) 00:24:11.346 slat (nsec): min=6440, max=57539, avg=11909.78, stdev=4704.94 00:24:11.346 clat (msec): min=19, max=162, avg=79.88, stdev=22.75 00:24:11.346 lat (msec): min=19, max=162, avg=79.89, stdev=22.75 00:24:11.346 clat percentiles (msec): 00:24:11.346 | 1.00th=[ 25], 5.00th=[ 43], 10.00th=[ 53], 20.00th=[ 61], 00:24:11.346 | 30.00th=[ 67], 40.00th=[ 75], 50.00th=[ 80], 60.00th=[ 83], 00:24:11.346 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 111], 00:24:11.346 | 99.00th=[ 129], 99.50th=[ 138], 99.90th=[ 163], 99.95th=[ 163], 00:24:11.346 | 99.99th=[ 163] 00:24:11.346 bw ( KiB/s): min= 560, max= 1112, per=4.06%, avg=799.20, stdev=169.78, samples=20 00:24:11.346 iops : min= 140, max= 278, avg=199.80, stdev=42.45, samples=20 00:24:11.346 lat (msec) : 20=0.70%, 50=7.32%, 100=66.97%, 250=25.01% 00:24:11.346 cpu : usr=34.48%, sys=2.93%, ctx=672, majf=0, minf=9 00:24:11.346 IO depths : 1=0.2%, 2=0.6%, 4=1.8%, 8=80.2%, 16=17.0%, 32=0.0%, >=64=0.0% 00:24:11.346 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.346 complete : 0=0.0%, 4=88.7%, 8=10.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.346 issued rwts: total=2007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.346 filename0: (groupid=0, jobs=1): err= 0: pid=94549: Mon Apr 15 16:16:39 2024 00:24:11.346 read: IOPS=205, BW=821KiB/s (840kB/s)(8216KiB/10011msec) 00:24:11.346 slat (usec): min=6, max=18054, avg=32.68, stdev=562.21 00:24:11.346 clat (msec): min=26, max=163, avg=77.82, stdev=22.47 00:24:11.346 lat (msec): min=26, max=163, avg=77.86, stdev=22.46 00:24:11.346 clat percentiles (msec): 00:24:11.346 | 1.00th=[ 32], 5.00th=[ 50], 10.00th=[ 53], 20.00th=[ 55], 00:24:11.346 | 30.00th=[ 60], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 82], 00:24:11.346 | 70.00th=[ 88], 80.00th=[ 104], 90.00th=[ 108], 95.00th=[ 111], 00:24:11.346 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 161], 99.95th=[ 165], 00:24:11.346 | 99.99th=[ 165] 00:24:11.346 bw ( KiB/s): min= 624, max= 1083, per=4.15%, avg=817.35, stdev=153.24, samples=20 00:24:11.346 iops : min= 156, max= 270, avg=204.30, stdev=38.24, samples=20 00:24:11.346 lat (msec) : 50=5.99%, 100=71.18%, 250=22.83% 00:24:11.346 cpu : usr=34.04%, sys=2.94%, ctx=416, majf=0, minf=9 00:24:11.346 IO depths : 1=0.1%, 2=1.4%, 4=5.2%, 8=77.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:11.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.346 complete : 0=0.0%, 4=88.9%, 8=10.0%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.346 issued rwts: total=2054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.346 filename0: (groupid=0, jobs=1): err= 0: pid=94550: Mon Apr 15 16:16:39 2024 00:24:11.346 read: IOPS=229, BW=917KiB/s (939kB/s)(9236KiB/10067msec) 00:24:11.346 slat (nsec): min=5191, max=51729, avg=12895.80, stdev=5471.52 00:24:11.346 clat (usec): min=1577, max=158235, avg=69584.28, stdev=32727.90 00:24:11.346 lat (usec): min=1588, max=158250, avg=69597.17, stdev=32726.87 00:24:11.346 clat percentiles (usec): 00:24:11.346 | 1.00th=[ 1663], 5.00th=[ 1795], 10.00th=[ 7439], 20.00th=[ 51119], 00:24:11.346 | 30.00th=[ 56361], 40.00th=[ 63177], 50.00th=[ 72877], 60.00th=[ 79168], 00:24:11.346 | 70.00th=[ 88605], 80.00th=[101188], 90.00th=[108528], 95.00th=[114820], 00:24:11.346 | 99.00th=[130548], 99.50th=[132645], 99.90th=[141558], 99.95th=[158335], 00:24:11.346 | 99.99th=[158335] 00:24:11.346 bw ( KiB/s): min= 528, max= 3200, per=4.66%, avg=917.20, stdev=558.88, samples=20 00:24:11.346 iops : min= 132, max= 800, avg=229.30, stdev=139.72, samples=20 00:24:11.346 lat (msec) : 2=8.06%, 4=0.26%, 10=2.77%, 20=1.30%, 50=6.71% 00:24:11.346 lat (msec) : 100=59.51%, 250=21.39% 00:24:11.346 cpu : usr=44.57%, sys=3.53%, ctx=438, majf=0, minf=0 00:24:11.346 IO depths : 1=0.7%, 2=2.6%, 4=7.6%, 8=74.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:24:11.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.346 complete : 0=0.0%, 4=89.8%, 8=8.6%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.346 issued rwts: total=2309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.346 filename0: (groupid=0, jobs=1): err= 0: pid=94551: Mon Apr 15 16:16:39 2024 00:24:11.346 read: IOPS=204, BW=816KiB/s (836kB/s)(8180KiB/10023msec) 00:24:11.346 slat (usec): min=7, max=13062, avg=28.05, stdev=407.81 00:24:11.346 clat (msec): 
min=26, max=155, avg=78.26, stdev=22.97 00:24:11.346 lat (msec): min=26, max=155, avg=78.29, stdev=22.96 00:24:11.346 clat percentiles (msec): 00:24:11.346 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 52], 20.00th=[ 58], 00:24:11.346 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 78], 60.00th=[ 82], 00:24:11.346 | 70.00th=[ 93], 80.00th=[ 104], 90.00th=[ 109], 95.00th=[ 117], 00:24:11.346 | 99.00th=[ 131], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 157], 00:24:11.346 | 99.99th=[ 157] 00:24:11.346 bw ( KiB/s): min= 584, max= 1072, per=4.14%, avg=814.00, stdev=155.45, samples=20 00:24:11.346 iops : min= 146, max= 268, avg=203.50, stdev=38.86, samples=20 00:24:11.346 lat (msec) : 50=9.05%, 100=66.89%, 250=24.06% 00:24:11.346 cpu : usr=42.15%, sys=3.36%, ctx=528, majf=0, minf=9 00:24:11.346 IO depths : 1=0.3%, 2=1.1%, 4=3.5%, 8=79.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:11.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.346 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.346 issued rwts: total=2045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.346 filename0: (groupid=0, jobs=1): err= 0: pid=94552: Mon Apr 15 16:16:39 2024 00:24:11.346 read: IOPS=196, BW=787KiB/s (806kB/s)(7888KiB/10026msec) 00:24:11.346 slat (usec): min=4, max=18037, avg=22.29, stdev=405.93 00:24:11.346 clat (msec): min=24, max=161, avg=81.19, stdev=23.45 00:24:11.346 lat (msec): min=24, max=161, avg=81.21, stdev=23.44 00:24:11.346 clat percentiles (msec): 00:24:11.346 | 1.00th=[ 34], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 56], 00:24:11.346 | 30.00th=[ 63], 40.00th=[ 78], 50.00th=[ 82], 60.00th=[ 84], 00:24:11.346 | 70.00th=[ 93], 80.00th=[ 106], 90.00th=[ 109], 95.00th=[ 116], 00:24:11.346 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 163], 99.95th=[ 163], 00:24:11.346 | 99.99th=[ 163] 00:24:11.346 bw ( KiB/s): min= 624, max= 1040, per=3.99%, avg=784.80, stdev=148.03, samples=20 00:24:11.346 iops : min= 156, max= 260, avg=196.20, stdev=37.01, samples=20 00:24:11.346 lat (msec) : 50=4.06%, 100=70.18%, 250=25.76% 00:24:11.346 cpu : usr=32.65%, sys=2.57%, ctx=399, majf=0, minf=9 00:24:11.346 IO depths : 1=0.2%, 2=1.3%, 4=5.6%, 8=76.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:24:11.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.346 complete : 0=0.0%, 4=89.5%, 8=9.1%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.346 issued rwts: total=1972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.346 filename0: (groupid=0, jobs=1): err= 0: pid=94553: Mon Apr 15 16:16:39 2024 00:24:11.346 read: IOPS=204, BW=819KiB/s (838kB/s)(8196KiB/10010msec) 00:24:11.346 slat (usec): min=7, max=13284, avg=26.12, stdev=368.69 00:24:11.346 clat (msec): min=24, max=151, avg=78.04, stdev=23.85 00:24:11.346 lat (msec): min=24, max=151, avg=78.06, stdev=23.85 00:24:11.346 clat percentiles (msec): 00:24:11.347 | 1.00th=[ 33], 5.00th=[ 42], 10.00th=[ 50], 20.00th=[ 57], 00:24:11.347 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 75], 60.00th=[ 82], 00:24:11.347 | 70.00th=[ 91], 80.00th=[ 103], 90.00th=[ 112], 95.00th=[ 121], 00:24:11.347 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 140], 99.95th=[ 153], 00:24:11.347 | 99.99th=[ 153] 00:24:11.347 bw ( KiB/s): min= 640, max= 1152, per=4.15%, avg=816.00, stdev=172.75, samples=20 00:24:11.347 iops : min= 160, max= 288, avg=204.00, stdev=43.19, samples=20 00:24:11.347 lat (msec) : 50=10.98%, 
100=64.86%, 250=24.16% 00:24:11.347 cpu : usr=39.18%, sys=3.27%, ctx=601, majf=0, minf=9 00:24:11.347 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=78.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:11.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.347 complete : 0=0.0%, 4=88.6%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.347 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.347 filename0: (groupid=0, jobs=1): err= 0: pid=94554: Mon Apr 15 16:16:39 2024 00:24:11.347 read: IOPS=202, BW=808KiB/s (828kB/s)(8096KiB/10018msec) 00:24:11.347 slat (nsec): min=4865, max=39295, avg=12568.90, stdev=4900.93 00:24:11.347 clat (msec): min=27, max=162, avg=79.11, stdev=23.99 00:24:11.347 lat (msec): min=27, max=162, avg=79.12, stdev=23.99 00:24:11.347 clat percentiles (msec): 00:24:11.347 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 50], 20.00th=[ 58], 00:24:11.347 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 78], 60.00th=[ 83], 00:24:11.347 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 110], 95.00th=[ 117], 00:24:11.347 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 163], 99.95th=[ 163], 00:24:11.347 | 99.99th=[ 163] 00:24:11.347 bw ( KiB/s): min= 520, max= 1160, per=4.09%, avg=805.60, stdev=185.00, samples=20 00:24:11.347 iops : min= 130, max= 290, avg=201.40, stdev=46.25, samples=20 00:24:11.347 lat (msec) : 50=10.97%, 100=63.78%, 250=25.25% 00:24:11.347 cpu : usr=34.86%, sys=3.16%, ctx=737, majf=0, minf=9 00:24:11.347 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=80.7%, 16=17.1%, 32=0.0%, >=64=0.0% 00:24:11.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.347 complete : 0=0.0%, 4=88.5%, 8=11.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.347 issued rwts: total=2024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.347 filename1: (groupid=0, jobs=1): err= 0: pid=94555: Mon Apr 15 16:16:39 2024 00:24:11.347 read: IOPS=202, BW=809KiB/s (829kB/s)(8120KiB/10035msec) 00:24:11.347 slat (nsec): min=4165, max=60739, avg=13314.01, stdev=5456.30 00:24:11.347 clat (msec): min=6, max=150, avg=78.97, stdev=23.07 00:24:11.347 lat (msec): min=6, max=150, avg=78.99, stdev=23.07 00:24:11.347 clat percentiles (msec): 00:24:11.347 | 1.00th=[ 10], 5.00th=[ 46], 10.00th=[ 54], 20.00th=[ 57], 00:24:11.347 | 30.00th=[ 64], 40.00th=[ 75], 50.00th=[ 80], 60.00th=[ 83], 00:24:11.347 | 70.00th=[ 91], 80.00th=[ 104], 90.00th=[ 108], 95.00th=[ 112], 00:24:11.347 | 99.00th=[ 130], 99.50th=[ 134], 99.90th=[ 138], 99.95th=[ 140], 00:24:11.347 | 99.99th=[ 150] 00:24:11.347 bw ( KiB/s): min= 600, max= 1280, per=4.10%, avg=806.80, stdev=164.67, samples=20 00:24:11.347 iops : min= 150, max= 320, avg=201.70, stdev=41.17, samples=20 00:24:11.347 lat (msec) : 10=1.58%, 50=4.68%, 100=68.72%, 250=25.02% 00:24:11.347 cpu : usr=33.33%, sys=2.61%, ctx=447, majf=0, minf=9 00:24:11.347 IO depths : 1=0.1%, 2=1.1%, 4=4.8%, 8=77.4%, 16=16.5%, 32=0.0%, >=64=0.0% 00:24:11.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.347 complete : 0=0.0%, 4=89.4%, 8=9.4%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.347 issued rwts: total=2030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.347 filename1: (groupid=0, jobs=1): err= 0: pid=94556: Mon Apr 15 16:16:39 2024 00:24:11.347 read: IOPS=210, BW=843KiB/s (863kB/s)(8440KiB/10015msec) 00:24:11.347 slat 
(usec): min=4, max=13026, avg=23.13, stdev=312.51 00:24:11.347 clat (msec): min=28, max=129, avg=75.84, stdev=21.49 00:24:11.347 lat (msec): min=28, max=129, avg=75.87, stdev=21.50 00:24:11.347 clat percentiles (msec): 00:24:11.347 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 52], 20.00th=[ 56], 00:24:11.347 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 74], 60.00th=[ 81], 00:24:11.347 | 70.00th=[ 86], 80.00th=[ 100], 90.00th=[ 106], 95.00th=[ 112], 00:24:11.347 | 99.00th=[ 125], 99.50th=[ 125], 99.90th=[ 130], 99.95th=[ 130], 00:24:11.347 | 99.99th=[ 130] 00:24:11.347 bw ( KiB/s): min= 616, max= 1064, per=4.26%, avg=837.60, stdev=141.14, samples=20 00:24:11.347 iops : min= 154, max= 266, avg=209.40, stdev=35.29, samples=20 00:24:11.347 lat (msec) : 50=8.77%, 100=71.75%, 250=19.48% 00:24:11.347 cpu : usr=41.79%, sys=3.41%, ctx=499, majf=0, minf=9 00:24:11.347 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:24:11.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.347 complete : 0=0.0%, 4=88.1%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.347 issued rwts: total=2110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.347 filename1: (groupid=0, jobs=1): err= 0: pid=94557: Mon Apr 15 16:16:39 2024 00:24:11.347 read: IOPS=204, BW=817KiB/s (836kB/s)(8212KiB/10053msec) 00:24:11.347 slat (usec): min=6, max=5046, avg=15.06, stdev=111.23 00:24:11.347 clat (msec): min=3, max=149, avg=78.19, stdev=26.21 00:24:11.347 lat (msec): min=3, max=149, avg=78.20, stdev=26.21 00:24:11.347 clat percentiles (msec): 00:24:11.347 | 1.00th=[ 7], 5.00th=[ 34], 10.00th=[ 51], 20.00th=[ 58], 00:24:11.347 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 85], 00:24:11.347 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 112], 95.00th=[ 117], 00:24:11.347 | 99.00th=[ 129], 99.50th=[ 142], 99.90th=[ 148], 99.95th=[ 150], 00:24:11.347 | 99.99th=[ 150] 00:24:11.347 bw ( KiB/s): min= 544, max= 1536, per=4.14%, avg=814.80, stdev=232.91, samples=20 00:24:11.347 iops : min= 136, max= 384, avg=203.70, stdev=58.23, samples=20 00:24:11.347 lat (msec) : 4=0.78%, 10=1.56%, 20=1.46%, 50=6.14%, 100=65.22% 00:24:11.347 lat (msec) : 250=24.84% 00:24:11.347 cpu : usr=41.19%, sys=3.17%, ctx=541, majf=0, minf=9 00:24:11.347 IO depths : 1=0.1%, 2=1.9%, 4=7.6%, 8=74.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:11.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.347 complete : 0=0.0%, 4=89.9%, 8=8.4%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.347 issued rwts: total=2053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.347 filename1: (groupid=0, jobs=1): err= 0: pid=94558: Mon Apr 15 16:16:39 2024 00:24:11.347 read: IOPS=199, BW=799KiB/s (818kB/s)(8008KiB/10021msec) 00:24:11.347 slat (nsec): min=6460, max=38800, avg=13126.16, stdev=5214.02 00:24:11.347 clat (msec): min=24, max=145, avg=79.97, stdev=22.37 00:24:11.347 lat (msec): min=24, max=145, avg=79.98, stdev=22.37 00:24:11.347 clat percentiles (msec): 00:24:11.347 | 1.00th=[ 33], 5.00th=[ 50], 10.00th=[ 54], 20.00th=[ 57], 00:24:11.347 | 30.00th=[ 65], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 83], 00:24:11.347 | 70.00th=[ 95], 80.00th=[ 105], 90.00th=[ 108], 95.00th=[ 115], 00:24:11.347 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 146], 99.95th=[ 146], 00:24:11.347 | 99.99th=[ 146] 00:24:11.347 bw ( KiB/s): min= 632, max= 1040, per=4.05%, avg=796.80, 
stdev=151.18, samples=20 00:24:11.347 iops : min= 158, max= 260, avg=199.20, stdev=37.79, samples=20 00:24:11.347 lat (msec) : 50=5.99%, 100=67.13%, 250=26.87% 00:24:11.347 cpu : usr=30.69%, sys=2.39%, ctx=437, majf=0, minf=9 00:24:11.347 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=77.6%, 16=16.7%, 32=0.0%, >=64=0.0% 00:24:11.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.347 complete : 0=0.0%, 4=89.3%, 8=9.7%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.347 issued rwts: total=2002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.347 filename1: (groupid=0, jobs=1): err= 0: pid=94559: Mon Apr 15 16:16:39 2024 00:24:11.347 read: IOPS=212, BW=852KiB/s (872kB/s)(8524KiB/10009msec) 00:24:11.347 slat (usec): min=6, max=13035, avg=29.52, stdev=407.01 00:24:11.347 clat (msec): min=9, max=140, avg=75.01, stdev=21.21 00:24:11.347 lat (msec): min=9, max=140, avg=75.04, stdev=21.20 00:24:11.347 clat percentiles (msec): 00:24:11.347 | 1.00th=[ 29], 5.00th=[ 44], 10.00th=[ 52], 20.00th=[ 56], 00:24:11.347 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 74], 60.00th=[ 81], 00:24:11.347 | 70.00th=[ 87], 80.00th=[ 99], 90.00th=[ 103], 95.00th=[ 110], 00:24:11.347 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 136], 99.95th=[ 136], 00:24:11.347 | 99.99th=[ 142] 00:24:11.347 bw ( KiB/s): min= 688, max= 1104, per=4.24%, avg=834.53, stdev=142.36, samples=19 00:24:11.347 iops : min= 172, max= 276, avg=208.63, stdev=35.59, samples=19 00:24:11.347 lat (msec) : 10=0.14%, 20=0.42%, 50=8.68%, 100=72.31%, 250=18.44% 00:24:11.347 cpu : usr=41.78%, sys=3.44%, ctx=549, majf=0, minf=9 00:24:11.347 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:24:11.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.347 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.347 issued rwts: total=2131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.347 filename1: (groupid=0, jobs=1): err= 0: pid=94560: Mon Apr 15 16:16:39 2024 00:24:11.347 read: IOPS=208, BW=836KiB/s (856kB/s)(8372KiB/10017msec) 00:24:11.347 slat (nsec): min=3975, max=66809, avg=14140.15, stdev=5989.08 00:24:11.347 clat (msec): min=27, max=166, avg=76.48, stdev=22.32 00:24:11.347 lat (msec): min=27, max=166, avg=76.49, stdev=22.32 00:24:11.347 clat percentiles (msec): 00:24:11.347 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 50], 20.00th=[ 54], 00:24:11.348 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 77], 60.00th=[ 82], 00:24:11.348 | 70.00th=[ 88], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 111], 00:24:11.348 | 99.00th=[ 125], 99.50th=[ 133], 99.90th=[ 167], 99.95th=[ 167], 00:24:11.348 | 99.99th=[ 167] 00:24:11.348 bw ( KiB/s): min= 640, max= 1064, per=4.22%, avg=830.85, stdev=152.37, samples=20 00:24:11.348 iops : min= 160, max= 266, avg=207.70, stdev=38.09, samples=20 00:24:11.348 lat (msec) : 50=10.70%, 100=69.33%, 250=19.97% 00:24:11.348 cpu : usr=36.80%, sys=2.76%, ctx=464, majf=0, minf=9 00:24:11.348 IO depths : 1=0.2%, 2=0.8%, 4=2.6%, 8=80.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:24:11.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.348 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.348 issued rwts: total=2093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.348 filename1: (groupid=0, 
jobs=1): err= 0: pid=94561: Mon Apr 15 16:16:39 2024 00:24:11.348 read: IOPS=194, BW=779KiB/s (798kB/s)(7808KiB/10019msec) 00:24:11.348 slat (usec): min=4, max=14005, avg=20.83, stdev=316.74 00:24:11.348 clat (msec): min=23, max=161, avg=82.01, stdev=24.91 00:24:11.348 lat (msec): min=23, max=161, avg=82.04, stdev=24.90 00:24:11.348 clat percentiles (msec): 00:24:11.348 | 1.00th=[ 34], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 55], 00:24:11.348 | 30.00th=[ 61], 40.00th=[ 78], 50.00th=[ 82], 60.00th=[ 84], 00:24:11.348 | 70.00th=[ 103], 80.00th=[ 108], 90.00th=[ 110], 95.00th=[ 136], 00:24:11.348 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 163], 99.95th=[ 163], 00:24:11.348 | 99.99th=[ 163] 00:24:11.348 bw ( KiB/s): min= 528, max= 1064, per=3.94%, avg=774.40, stdev=182.57, samples=20 00:24:11.348 iops : min= 132, max= 266, avg=193.60, stdev=45.64, samples=20 00:24:11.348 lat (msec) : 50=3.07%, 100=64.86%, 250=32.07% 00:24:11.348 cpu : usr=30.78%, sys=2.38%, ctx=385, majf=0, minf=9 00:24:11.348 IO depths : 1=0.2%, 2=2.2%, 4=9.1%, 8=72.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:11.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.348 complete : 0=0.0%, 4=90.4%, 8=7.4%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.348 issued rwts: total=1952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.348 filename1: (groupid=0, jobs=1): err= 0: pid=94562: Mon Apr 15 16:16:39 2024 00:24:11.348 read: IOPS=200, BW=802KiB/s (821kB/s)(8032KiB/10015msec) 00:24:11.348 slat (usec): min=4, max=18045, avg=40.20, stdev=696.12 00:24:11.348 clat (msec): min=28, max=160, avg=79.57, stdev=24.25 00:24:11.348 lat (msec): min=28, max=160, avg=79.61, stdev=24.24 00:24:11.348 clat percentiles (msec): 00:24:11.348 | 1.00th=[ 33], 5.00th=[ 49], 10.00th=[ 54], 20.00th=[ 55], 00:24:11.348 | 30.00th=[ 61], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 82], 00:24:11.348 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 109], 95.00th=[ 122], 00:24:11.348 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 146], 99.95th=[ 161], 00:24:11.348 | 99.99th=[ 161] 00:24:11.348 bw ( KiB/s): min= 528, max= 1072, per=4.06%, avg=799.20, stdev=165.85, samples=20 00:24:11.348 iops : min= 132, max= 268, avg=199.80, stdev=41.46, samples=20 00:24:11.348 lat (msec) : 50=6.32%, 100=67.88%, 250=25.80% 00:24:11.348 cpu : usr=30.97%, sys=2.18%, ctx=422, majf=0, minf=9 00:24:11.348 IO depths : 1=0.1%, 2=1.3%, 4=4.9%, 8=77.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:24:11.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.348 complete : 0=0.0%, 4=89.1%, 8=9.8%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.348 issued rwts: total=2008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.348 filename2: (groupid=0, jobs=1): err= 0: pid=94563: Mon Apr 15 16:16:39 2024 00:24:11.348 read: IOPS=206, BW=826KiB/s (846kB/s)(8268KiB/10005msec) 00:24:11.348 slat (nsec): min=5325, max=64150, avg=14907.25, stdev=6173.26 00:24:11.348 clat (msec): min=6, max=150, avg=77.37, stdev=23.16 00:24:11.348 lat (msec): min=6, max=150, avg=77.38, stdev=23.16 00:24:11.348 clat percentiles (msec): 00:24:11.348 | 1.00th=[ 14], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 55], 00:24:11.348 | 30.00th=[ 61], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 82], 00:24:11.348 | 70.00th=[ 89], 80.00th=[ 101], 90.00th=[ 107], 95.00th=[ 114], 00:24:11.348 | 99.00th=[ 129], 99.50th=[ 133], 99.90th=[ 140], 99.95th=[ 150], 00:24:11.348 | 
99.99th=[ 150] 00:24:11.348 bw ( KiB/s): min= 632, max= 1016, per=4.08%, avg=803.42, stdev=128.80, samples=19 00:24:11.348 iops : min= 158, max= 254, avg=200.84, stdev=32.20, samples=19 00:24:11.348 lat (msec) : 10=0.77%, 20=0.63%, 50=4.69%, 100=73.97%, 250=19.93% 00:24:11.348 cpu : usr=36.12%, sys=2.83%, ctx=414, majf=0, minf=9 00:24:11.348 IO depths : 1=0.2%, 2=1.1%, 4=3.7%, 8=78.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:11.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.348 complete : 0=0.0%, 4=88.7%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.348 issued rwts: total=2067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.348 filename2: (groupid=0, jobs=1): err= 0: pid=94564: Mon Apr 15 16:16:39 2024 00:24:11.348 read: IOPS=213, BW=854KiB/s (875kB/s)(8544KiB/10002msec) 00:24:11.348 slat (usec): min=7, max=18036, avg=27.60, stdev=447.13 00:24:11.348 clat (msec): min=3, max=132, avg=74.77, stdev=22.61 00:24:11.348 lat (msec): min=3, max=132, avg=74.80, stdev=22.61 00:24:11.348 clat percentiles (msec): 00:24:11.348 | 1.00th=[ 11], 5.00th=[ 36], 10.00th=[ 53], 20.00th=[ 55], 00:24:11.348 | 30.00th=[ 58], 40.00th=[ 70], 50.00th=[ 77], 60.00th=[ 81], 00:24:11.348 | 70.00th=[ 86], 80.00th=[ 99], 90.00th=[ 106], 95.00th=[ 110], 00:24:11.348 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 133], 99.95th=[ 133], 00:24:11.348 | 99.99th=[ 133] 00:24:11.348 bw ( KiB/s): min= 680, max= 1104, per=4.22%, avg=829.47, stdev=134.98, samples=19 00:24:11.348 iops : min= 170, max= 276, avg=207.37, stdev=33.74, samples=19 00:24:11.348 lat (msec) : 4=0.33%, 10=0.70%, 20=0.33%, 50=6.09%, 100=75.51% 00:24:11.348 lat (msec) : 250=17.04% 00:24:11.348 cpu : usr=36.95%, sys=2.77%, ctx=607, majf=0, minf=9 00:24:11.348 IO depths : 1=0.1%, 2=0.5%, 4=1.7%, 8=81.4%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:11.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.348 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.348 issued rwts: total=2136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.348 filename2: (groupid=0, jobs=1): err= 0: pid=94565: Mon Apr 15 16:16:39 2024 00:24:11.348 read: IOPS=202, BW=812KiB/s (831kB/s)(8124KiB/10011msec) 00:24:11.348 slat (usec): min=7, max=17837, avg=23.07, stdev=395.55 00:24:11.348 clat (msec): min=12, max=137, avg=78.72, stdev=23.55 00:24:11.348 lat (msec): min=12, max=137, avg=78.74, stdev=23.54 00:24:11.348 clat percentiles (msec): 00:24:11.348 | 1.00th=[ 27], 5.00th=[ 43], 10.00th=[ 52], 20.00th=[ 55], 00:24:11.348 | 30.00th=[ 62], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 83], 00:24:11.348 | 70.00th=[ 90], 80.00th=[ 106], 90.00th=[ 109], 95.00th=[ 113], 00:24:11.348 | 99.00th=[ 128], 99.50th=[ 130], 99.90th=[ 136], 99.95th=[ 136], 00:24:11.348 | 99.99th=[ 138] 00:24:11.348 bw ( KiB/s): min= 632, max= 1096, per=4.11%, avg=808.85, stdev=149.13, samples=20 00:24:11.348 iops : min= 158, max= 274, avg=202.20, stdev=37.28, samples=20 00:24:11.348 lat (msec) : 20=0.30%, 50=7.83%, 100=64.55%, 250=27.33% 00:24:11.348 cpu : usr=35.04%, sys=2.79%, ctx=613, majf=0, minf=9 00:24:11.348 IO depths : 1=0.1%, 2=0.8%, 4=3.6%, 8=79.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:24:11.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.348 complete : 0=0.0%, 4=88.8%, 8=10.2%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.348 issued rwts: 
total=2031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.348 filename2: (groupid=0, jobs=1): err= 0: pid=94566: Mon Apr 15 16:16:39 2024 00:24:11.348 read: IOPS=205, BW=823KiB/s (843kB/s)(8260KiB/10036msec) 00:24:11.348 slat (usec): min=6, max=13032, avg=26.54, stdev=399.86 00:24:11.348 clat (msec): min=8, max=147, avg=77.54, stdev=22.62 00:24:11.348 lat (msec): min=8, max=147, avg=77.57, stdev=22.62 00:24:11.348 clat percentiles (msec): 00:24:11.348 | 1.00th=[ 12], 5.00th=[ 45], 10.00th=[ 53], 20.00th=[ 58], 00:24:11.348 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 79], 60.00th=[ 84], 00:24:11.348 | 70.00th=[ 91], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 112], 00:24:11.348 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 142], 99.95th=[ 144], 00:24:11.348 | 99.99th=[ 148] 00:24:11.348 bw ( KiB/s): min= 664, max= 1280, per=4.16%, avg=819.60, stdev=152.64, samples=20 00:24:11.348 iops : min= 166, max= 320, avg=204.90, stdev=38.16, samples=20 00:24:11.348 lat (msec) : 10=0.10%, 20=1.45%, 50=6.25%, 100=70.99%, 250=21.21% 00:24:11.348 cpu : usr=43.01%, sys=3.31%, ctx=700, majf=0, minf=9 00:24:11.348 IO depths : 1=0.1%, 2=1.0%, 4=4.3%, 8=78.2%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:11.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.348 complete : 0=0.0%, 4=89.0%, 8=9.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.348 issued rwts: total=2065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.348 filename2: (groupid=0, jobs=1): err= 0: pid=94567: Mon Apr 15 16:16:39 2024 00:24:11.348 read: IOPS=210, BW=840KiB/s (861kB/s)(8408KiB/10004msec) 00:24:11.348 slat (usec): min=3, max=9779, avg=18.94, stdev=213.07 00:24:11.348 clat (msec): min=6, max=149, avg=76.04, stdev=24.23 00:24:11.348 lat (msec): min=6, max=149, avg=76.05, stdev=24.22 00:24:11.348 clat percentiles (msec): 00:24:11.348 | 1.00th=[ 13], 5.00th=[ 41], 10.00th=[ 49], 20.00th=[ 55], 00:24:11.348 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 77], 60.00th=[ 82], 00:24:11.348 | 70.00th=[ 88], 80.00th=[ 102], 90.00th=[ 108], 95.00th=[ 111], 00:24:11.348 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:24:11.348 | 99.99th=[ 150] 00:24:11.348 bw ( KiB/s): min= 568, max= 1048, per=4.12%, avg=810.95, stdev=160.49, samples=19 00:24:11.349 iops : min= 142, max= 262, avg=202.74, stdev=40.12, samples=19 00:24:11.349 lat (msec) : 10=0.95%, 20=0.29%, 50=9.94%, 100=66.08%, 250=22.74% 00:24:11.349 cpu : usr=37.34%, sys=2.98%, ctx=464, majf=0, minf=9 00:24:11.349 IO depths : 1=0.1%, 2=1.8%, 4=7.2%, 8=75.7%, 16=15.2%, 32=0.0%, >=64=0.0% 00:24:11.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.349 complete : 0=0.0%, 4=89.3%, 8=9.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.349 issued rwts: total=2102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.349 filename2: (groupid=0, jobs=1): err= 0: pid=94568: Mon Apr 15 16:16:39 2024 00:24:11.349 read: IOPS=211, BW=848KiB/s (868kB/s)(8484KiB/10007msec) 00:24:11.349 slat (usec): min=3, max=18012, avg=28.86, stdev=482.51 00:24:11.349 clat (msec): min=7, max=159, avg=75.36, stdev=24.59 00:24:11.349 lat (msec): min=7, max=159, avg=75.39, stdev=24.61 00:24:11.349 clat percentiles (msec): 00:24:11.349 | 1.00th=[ 13], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 55], 00:24:11.349 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 74], 60.00th=[ 
80], 00:24:11.349 | 70.00th=[ 87], 80.00th=[ 101], 90.00th=[ 109], 95.00th=[ 121], 00:24:11.349 | 99.00th=[ 131], 99.50th=[ 131], 99.90th=[ 150], 99.95th=[ 159], 00:24:11.349 | 99.99th=[ 159] 00:24:11.349 bw ( KiB/s): min= 624, max= 1088, per=4.18%, avg=821.89, stdev=159.22, samples=19 00:24:11.349 iops : min= 156, max= 272, avg=205.47, stdev=39.80, samples=19 00:24:11.349 lat (msec) : 10=0.94%, 20=0.28%, 50=10.66%, 100=67.85%, 250=20.27% 00:24:11.349 cpu : usr=44.04%, sys=3.25%, ctx=630, majf=0, minf=9 00:24:11.349 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:11.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.349 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.349 issued rwts: total=2121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.349 filename2: (groupid=0, jobs=1): err= 0: pid=94569: Mon Apr 15 16:16:39 2024 00:24:11.349 read: IOPS=206, BW=825KiB/s (845kB/s)(8256KiB/10002msec) 00:24:11.349 slat (usec): min=3, max=18028, avg=21.96, stdev=396.56 00:24:11.349 clat (msec): min=2, max=185, avg=77.46, stdev=24.83 00:24:11.349 lat (msec): min=2, max=185, avg=77.48, stdev=24.84 00:24:11.349 clat percentiles (msec): 00:24:11.349 | 1.00th=[ 9], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 55], 00:24:11.349 | 30.00th=[ 59], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 82], 00:24:11.349 | 70.00th=[ 88], 80.00th=[ 103], 90.00th=[ 108], 95.00th=[ 114], 00:24:11.349 | 99.00th=[ 136], 99.50th=[ 159], 99.90th=[ 159], 99.95th=[ 186], 00:24:11.349 | 99.99th=[ 186] 00:24:11.349 bw ( KiB/s): min= 632, max= 1056, per=4.05%, avg=796.21, stdev=132.00, samples=19 00:24:11.349 iops : min= 158, max= 264, avg=199.05, stdev=33.00, samples=19 00:24:11.349 lat (msec) : 4=0.68%, 10=0.78%, 20=0.63%, 50=4.55%, 100=71.56% 00:24:11.349 lat (msec) : 250=21.80% 00:24:11.349 cpu : usr=31.10%, sys=2.18%, ctx=388, majf=0, minf=9 00:24:11.349 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=80.1%, 16=16.9%, 32=0.0%, >=64=0.0% 00:24:11.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.349 complete : 0=0.0%, 4=88.6%, 8=10.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.349 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.349 filename2: (groupid=0, jobs=1): err= 0: pid=94570: Mon Apr 15 16:16:39 2024 00:24:11.349 read: IOPS=208, BW=834KiB/s (854kB/s)(8348KiB/10004msec) 00:24:11.349 slat (usec): min=3, max=18064, avg=22.30, stdev=395.16 00:24:11.349 clat (msec): min=5, max=162, avg=76.60, stdev=24.98 00:24:11.349 lat (msec): min=5, max=162, avg=76.62, stdev=24.98 00:24:11.349 clat percentiles (msec): 00:24:11.349 | 1.00th=[ 11], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 55], 00:24:11.349 | 30.00th=[ 58], 40.00th=[ 69], 50.00th=[ 77], 60.00th=[ 82], 00:24:11.349 | 70.00th=[ 86], 80.00th=[ 104], 90.00th=[ 109], 95.00th=[ 116], 00:24:11.349 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 161], 99.95th=[ 163], 00:24:11.349 | 99.99th=[ 163] 00:24:11.349 bw ( KiB/s): min= 616, max= 1104, per=4.11%, avg=808.89, stdev=144.60, samples=19 00:24:11.349 iops : min= 154, max= 276, avg=202.21, stdev=36.15, samples=19 00:24:11.349 lat (msec) : 10=0.91%, 20=0.62%, 50=6.66%, 100=68.71%, 250=23.10% 00:24:11.349 cpu : usr=30.77%, sys=2.48%, ctx=445, majf=0, minf=9 00:24:11.349 IO depths : 1=0.1%, 2=0.6%, 4=2.6%, 8=80.5%, 16=16.3%, 32=0.0%, >=64=0.0% 00:24:11.349 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.349 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.349 issued rwts: total=2087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.349 00:24:11.349 Run status group 0 (all jobs): 00:24:11.349 READ: bw=19.2MiB/s (20.1MB/s), 779KiB/s-917KiB/s (798kB/s-939kB/s), io=193MiB (203MB), run=10002-10067msec 00:24:11.349 16:16:39 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:11.349 16:16:39 -- target/dif.sh@43 -- # local sub 00:24:11.349 16:16:39 -- target/dif.sh@45 -- # for sub in "$@" 00:24:11.349 16:16:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:11.349 16:16:39 -- target/dif.sh@36 -- # local sub_id=0 00:24:11.349 16:16:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:11.349 16:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.349 16:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:11.349 16:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.349 16:16:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:11.349 16:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.349 16:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:11.349 16:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.349 16:16:39 -- target/dif.sh@45 -- # for sub in "$@" 00:24:11.349 16:16:39 -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:11.349 16:16:39 -- target/dif.sh@36 -- # local sub_id=1 00:24:11.349 16:16:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.349 16:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.349 16:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:11.349 16:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.349 16:16:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:11.349 16:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.349 16:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:11.349 16:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.349 16:16:39 -- target/dif.sh@45 -- # for sub in "$@" 00:24:11.349 16:16:39 -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:11.349 16:16:39 -- target/dif.sh@36 -- # local sub_id=2 00:24:11.349 16:16:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:11.349 16:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.349 16:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:11.349 16:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.349 16:16:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:11.349 16:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.349 16:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:11.349 16:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.349 16:16:39 -- target/dif.sh@115 -- # NULL_DIF=1 00:24:11.349 16:16:39 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:11.349 16:16:39 -- target/dif.sh@115 -- # numjobs=2 00:24:11.349 16:16:39 -- target/dif.sh@115 -- # iodepth=8 00:24:11.349 16:16:39 -- target/dif.sh@115 -- # runtime=5 00:24:11.349 16:16:39 -- target/dif.sh@115 -- # files=1 00:24:11.349 16:16:39 -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:11.349 16:16:39 -- target/dif.sh@28 -- # local sub 00:24:11.349 16:16:39 -- target/dif.sh@30 -- # for 
sub in "$@" 00:24:11.349 16:16:39 -- target/dif.sh@31 -- # create_subsystem 0 00:24:11.349 16:16:39 -- target/dif.sh@18 -- # local sub_id=0 00:24:11.349 16:16:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:11.349 16:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.349 16:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:11.349 bdev_null0 00:24:11.349 16:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.349 16:16:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:11.349 16:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.349 16:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:11.349 16:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.349 16:16:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:11.349 16:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.349 16:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:11.349 16:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.349 16:16:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:11.349 16:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.349 16:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:11.349 [2024-04-15 16:16:39.452130] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.349 16:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.349 16:16:39 -- target/dif.sh@30 -- # for sub in "$@" 00:24:11.349 16:16:39 -- target/dif.sh@31 -- # create_subsystem 1 00:24:11.349 16:16:39 -- target/dif.sh@18 -- # local sub_id=1 00:24:11.349 16:16:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:11.349 16:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.349 16:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:11.349 bdev_null1 00:24:11.349 16:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.349 16:16:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:11.349 16:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.349 16:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:11.349 16:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.349 16:16:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:11.350 16:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.350 16:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:11.350 16:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.350 16:16:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.350 16:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.350 16:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:11.350 16:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.350 16:16:39 -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:11.350 16:16:39 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:11.350 16:16:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:11.350 16:16:39 -- nvmf/common.sh@521 -- # config=() 00:24:11.350 16:16:39 -- 
nvmf/common.sh@521 -- # local subsystem config 00:24:11.350 16:16:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:11.350 16:16:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.350 16:16:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:11.350 { 00:24:11.350 "params": { 00:24:11.350 "name": "Nvme$subsystem", 00:24:11.350 "trtype": "$TEST_TRANSPORT", 00:24:11.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.350 "adrfam": "ipv4", 00:24:11.350 "trsvcid": "$NVMF_PORT", 00:24:11.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.350 "hdgst": ${hdgst:-false}, 00:24:11.350 "ddgst": ${ddgst:-false} 00:24:11.350 }, 00:24:11.350 "method": "bdev_nvme_attach_controller" 00:24:11.350 } 00:24:11.350 EOF 00:24:11.350 )") 00:24:11.350 16:16:39 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.350 16:16:39 -- target/dif.sh@82 -- # gen_fio_conf 00:24:11.350 16:16:39 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:24:11.350 16:16:39 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:11.350 16:16:39 -- target/dif.sh@54 -- # local file 00:24:11.350 16:16:39 -- common/autotest_common.sh@1325 -- # local sanitizers 00:24:11.350 16:16:39 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:11.350 16:16:39 -- target/dif.sh@56 -- # cat 00:24:11.350 16:16:39 -- common/autotest_common.sh@1327 -- # shift 00:24:11.350 16:16:39 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:24:11.350 16:16:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.350 16:16:39 -- nvmf/common.sh@543 -- # cat 00:24:11.350 16:16:39 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:11.350 16:16:39 -- common/autotest_common.sh@1331 -- # grep libasan 00:24:11.350 16:16:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:11.350 16:16:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:24:11.350 16:16:39 -- target/dif.sh@72 -- # (( file <= files )) 00:24:11.350 16:16:39 -- target/dif.sh@73 -- # cat 00:24:11.350 16:16:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:11.350 16:16:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:11.350 { 00:24:11.350 "params": { 00:24:11.350 "name": "Nvme$subsystem", 00:24:11.350 "trtype": "$TEST_TRANSPORT", 00:24:11.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.350 "adrfam": "ipv4", 00:24:11.350 "trsvcid": "$NVMF_PORT", 00:24:11.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.350 "hdgst": ${hdgst:-false}, 00:24:11.350 "ddgst": ${ddgst:-false} 00:24:11.350 }, 00:24:11.350 "method": "bdev_nvme_attach_controller" 00:24:11.350 } 00:24:11.350 EOF 00:24:11.350 )") 00:24:11.350 16:16:39 -- target/dif.sh@72 -- # (( file++ )) 00:24:11.350 16:16:39 -- target/dif.sh@72 -- # (( file <= files )) 00:24:11.350 16:16:39 -- nvmf/common.sh@543 -- # cat 00:24:11.350 16:16:39 -- nvmf/common.sh@545 -- # jq . 
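The trace above shows nvmf/common.sh collecting one bdev_nvme_attach_controller fragment per subsystem and then normalizing the result with jq. The general shape of that builder, as a minimal bash sketch rather than the exact gen_nvmf_target_json (the surrounding "subsystems"/"bdev" wrapper and the fallback values after ":-" are assumptions; the variable names TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT, hdgst and ddgst are taken from the trace), is:

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        # One bdev_nvme_attach_controller fragment per cnode$subsystem, as in the trace
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas, wrap them in a "bdev" subsystem section
    # (an assumption about what the fio plugin is ultimately fed) and pretty-print
    local IFS=,
    printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}" | jq .
}

Called as gen_target_json_sketch 0 1, the sketch prints essentially the two-controller document that appears in the trace below, modulo the assumed wrapper.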
00:24:11.350 16:16:39 -- nvmf/common.sh@546 -- # IFS=, 00:24:11.350 16:16:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:11.350 "params": { 00:24:11.350 "name": "Nvme0", 00:24:11.350 "trtype": "tcp", 00:24:11.350 "traddr": "10.0.0.2", 00:24:11.350 "adrfam": "ipv4", 00:24:11.350 "trsvcid": "4420", 00:24:11.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:11.350 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:11.350 "hdgst": false, 00:24:11.350 "ddgst": false 00:24:11.350 }, 00:24:11.350 "method": "bdev_nvme_attach_controller" 00:24:11.350 },{ 00:24:11.350 "params": { 00:24:11.350 "name": "Nvme1", 00:24:11.350 "trtype": "tcp", 00:24:11.350 "traddr": "10.0.0.2", 00:24:11.350 "adrfam": "ipv4", 00:24:11.350 "trsvcid": "4420", 00:24:11.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.350 "hdgst": false, 00:24:11.350 "ddgst": false 00:24:11.350 }, 00:24:11.350 "method": "bdev_nvme_attach_controller" 00:24:11.350 }' 00:24:11.350 16:16:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:11.350 16:16:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:11.350 16:16:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.350 16:16:39 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:11.350 16:16:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:11.350 16:16:39 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:24:11.350 16:16:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:11.350 16:16:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:11.350 16:16:39 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:11.350 16:16:39 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.350 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:11.350 ... 00:24:11.350 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:11.350 ... 00:24:11.350 fio-3.35 00:24:11.350 Starting 4 threads 00:24:11.350 [2024-04-15 16:16:40.104435] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
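The preloaded plugin call above can be reproduced outside the harness by writing the job file to disk instead of passing it over /dev/fd/61. A minimal sketch, where the job options are reconstructed (and partly assumed) from the banners just above, the Nvme0n1/Nvme1n1 bdev names assume the usual SPDK namespace naming for the attached controllers, and /tmp/target.json holds the output of the config builder sketched earlier:

SPDK_DIR=/home/vagrant/spdk_repo/spdk    # path taken from the trace
FIO_BIN=/usr/src/fio/fio                 # likewise from the trace
PLUGIN=$SPDK_DIR/build/fio/spdk_bdev

# Job file matching the banners: randread, bs=(R)8k/(W)16k/(T)128k, iodepth=8,
# two job sections with numjobs=2 each (the "Starting 4 threads" line above);
# thread=1, time_based and runtime=5 are assumptions consistent with the run
cat > /tmp/dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

LD_PRELOAD=$PLUGIN "$FIO_BIN" --ioengine=spdk_bdev \
    --spdk_json_conf=/tmp/target.json /tmp/dif.fio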
00:24:11.350 [2024-04-15 16:16:40.104503] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:15.577 00:24:15.577 filename0: (groupid=0, jobs=1): err= 0: pid=94697: Mon Apr 15 16:16:45 2024 00:24:15.577 read: IOPS=2130, BW=16.6MiB/s (17.5MB/s)(83.3MiB/5001msec) 00:24:15.577 slat (nsec): min=6663, max=83062, avg=13923.23, stdev=5041.64 00:24:15.577 clat (usec): min=909, max=12388, avg=3712.87, stdev=989.81 00:24:15.577 lat (usec): min=918, max=12404, avg=3726.80, stdev=989.42 00:24:15.577 clat percentiles (usec): 00:24:15.577 | 1.00th=[ 1467], 5.00th=[ 1991], 10.00th=[ 2311], 20.00th=[ 2573], 00:24:15.577 | 30.00th=[ 3359], 40.00th=[ 3720], 50.00th=[ 3884], 60.00th=[ 4047], 00:24:15.577 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4752], 00:24:15.577 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 9110], 00:24:15.577 | 99.99th=[ 9110] 00:24:15.577 bw ( KiB/s): min=13248, max=19248, per=23.88%, avg=16853.33, stdev=2114.76, samples=9 00:24:15.577 iops : min= 1656, max= 2406, avg=2106.67, stdev=264.34, samples=9 00:24:15.577 lat (usec) : 1000=0.02% 00:24:15.577 lat (msec) : 2=5.08%, 4=52.56%, 10=42.34%, 20=0.01% 00:24:15.577 cpu : usr=90.86%, sys=8.02%, ctx=147, majf=0, minf=0 00:24:15.577 IO depths : 1=0.1%, 2=10.8%, 4=58.1%, 8=31.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:15.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.577 complete : 0=0.0%, 4=95.9%, 8=4.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.577 issued rwts: total=10657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.577 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:15.577 filename0: (groupid=0, jobs=1): err= 0: pid=94698: Mon Apr 15 16:16:45 2024 00:24:15.577 read: IOPS=2240, BW=17.5MiB/s (18.4MB/s)(87.6MiB/5004msec) 00:24:15.577 slat (usec): min=4, max=190, avg=11.27, stdev= 5.08 00:24:15.577 clat (usec): min=689, max=10168, avg=3536.75, stdev=1104.61 00:24:15.577 lat (usec): min=698, max=10176, avg=3548.02, stdev=1104.88 00:24:15.577 clat percentiles (usec): 00:24:15.577 | 1.00th=[ 1188], 5.00th=[ 1287], 10.00th=[ 1762], 20.00th=[ 2573], 00:24:15.577 | 30.00th=[ 2900], 40.00th=[ 3523], 50.00th=[ 3785], 60.00th=[ 4113], 00:24:15.577 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4752], 00:24:15.577 | 99.00th=[ 5145], 99.50th=[ 5473], 99.90th=[ 5866], 99.95th=[ 7439], 00:24:15.577 | 99.99th=[ 8455] 00:24:15.577 bw ( KiB/s): min=13952, max=22880, per=24.76%, avg=17473.78, stdev=3052.65, samples=9 00:24:15.577 iops : min= 1744, max= 2860, avg=2184.22, stdev=381.58, samples=9 00:24:15.577 lat (usec) : 750=0.03%, 1000=0.04% 00:24:15.577 lat (msec) : 2=12.83%, 4=44.41%, 10=42.69%, 20=0.01% 00:24:15.577 cpu : usr=89.69%, sys=9.29%, ctx=20, majf=0, minf=0 00:24:15.577 IO depths : 1=0.1%, 2=7.4%, 4=60.1%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:15.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.577 complete : 0=0.0%, 4=97.2%, 8=2.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.577 issued rwts: total=11210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.577 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:15.577 filename1: (groupid=0, jobs=1): err= 0: pid=94699: Mon Apr 15 16:16:45 2024 00:24:15.577 read: IOPS=2228, BW=17.4MiB/s (18.3MB/s)(87.1MiB/5002msec) 00:24:15.577 slat (usec): min=6, max=867, avg=14.96, stdev= 9.43 00:24:15.577 clat (usec): min=687, max=10150, avg=3546.88, stdev=957.08 00:24:15.577 lat (usec): min=694, max=10159, 
avg=3561.84, stdev=956.92 00:24:15.577 clat percentiles (usec): 00:24:15.577 | 1.00th=[ 1319], 5.00th=[ 1958], 10.00th=[ 2245], 20.00th=[ 2507], 00:24:15.577 | 30.00th=[ 2835], 40.00th=[ 3556], 50.00th=[ 3785], 60.00th=[ 3916], 00:24:15.577 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4686], 00:24:15.577 | 99.00th=[ 5473], 99.50th=[ 5800], 99.90th=[ 6325], 99.95th=[ 7504], 00:24:15.577 | 99.99th=[ 8455] 00:24:15.577 bw ( KiB/s): min=16768, max=19248, per=25.57%, avg=18042.67, stdev=1126.72, samples=9 00:24:15.577 iops : min= 2096, max= 2406, avg=2255.33, stdev=140.84, samples=9 00:24:15.577 lat (usec) : 750=0.04%, 1000=0.04% 00:24:15.577 lat (msec) : 2=6.29%, 4=56.46%, 10=37.15%, 20=0.01% 00:24:15.577 cpu : usr=90.20%, sys=8.48%, ctx=400, majf=0, minf=0 00:24:15.577 IO depths : 1=0.1%, 2=7.5%, 4=59.9%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:15.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.577 complete : 0=0.0%, 4=97.2%, 8=2.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.577 issued rwts: total=11149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.577 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:15.577 filename1: (groupid=0, jobs=1): err= 0: pid=94700: Mon Apr 15 16:16:45 2024 00:24:15.577 read: IOPS=2223, BW=17.4MiB/s (18.2MB/s)(86.9MiB/5001msec) 00:24:15.577 slat (nsec): min=3400, max=48333, avg=15095.43, stdev=4204.89 00:24:15.577 clat (usec): min=993, max=10124, avg=3555.45, stdev=896.70 00:24:15.577 lat (usec): min=1000, max=10134, avg=3570.54, stdev=896.31 00:24:15.577 clat percentiles (usec): 00:24:15.577 | 1.00th=[ 1647], 5.00th=[ 1991], 10.00th=[ 2278], 20.00th=[ 2540], 00:24:15.577 | 30.00th=[ 2868], 40.00th=[ 3589], 50.00th=[ 3785], 60.00th=[ 3949], 00:24:15.577 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4621], 00:24:15.577 | 99.00th=[ 5014], 99.50th=[ 5276], 99.90th=[ 5735], 99.95th=[ 7373], 00:24:15.577 | 99.99th=[ 8356] 00:24:15.577 bw ( KiB/s): min=16640, max=19296, per=25.47%, avg=17975.11, stdev=921.46, samples=9 00:24:15.577 iops : min= 2080, max= 2412, avg=2246.89, stdev=115.18, samples=9 00:24:15.577 lat (usec) : 1000=0.01% 00:24:15.577 lat (msec) : 2=5.41%, 4=57.08%, 10=37.49%, 20=0.01% 00:24:15.577 cpu : usr=89.86%, sys=9.18%, ctx=119, majf=0, minf=0 00:24:15.577 IO depths : 1=0.1%, 2=8.0%, 4=59.8%, 8=32.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:15.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.577 complete : 0=0.0%, 4=97.0%, 8=3.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.577 issued rwts: total=11119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.577 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:15.577 00:24:15.577 Run status group 0 (all jobs): 00:24:15.577 READ: bw=68.9MiB/s (72.3MB/s), 16.6MiB/s-17.5MiB/s (17.5MB/s-18.4MB/s), io=345MiB (362MB), run=5001-5004msec 00:24:15.577 16:16:45 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:15.577 16:16:45 -- target/dif.sh@43 -- # local sub 00:24:15.577 16:16:45 -- target/dif.sh@45 -- # for sub in "$@" 00:24:15.577 16:16:45 -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:15.577 16:16:45 -- target/dif.sh@36 -- # local sub_id=0 00:24:15.577 16:16:45 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:15.577 16:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.577 16:16:45 -- common/autotest_common.sh@10 -- # set +x 00:24:15.577 16:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.577 16:16:45 -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:15.577 16:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.577 16:16:45 -- common/autotest_common.sh@10 -- # set +x 00:24:15.577 16:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.577 16:16:45 -- target/dif.sh@45 -- # for sub in "$@" 00:24:15.577 16:16:45 -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:15.577 16:16:45 -- target/dif.sh@36 -- # local sub_id=1 00:24:15.577 16:16:45 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:15.577 16:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.577 16:16:45 -- common/autotest_common.sh@10 -- # set +x 00:24:15.577 16:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.577 16:16:45 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:15.577 16:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.577 16:16:45 -- common/autotest_common.sh@10 -- # set +x 00:24:15.577 16:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.577 00:24:15.577 real 0m23.263s 00:24:15.577 user 2m1.259s 00:24:15.577 sys 0m11.084s 00:24:15.577 16:16:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:15.577 ************************************ 00:24:15.577 16:16:45 -- common/autotest_common.sh@10 -- # set +x 00:24:15.577 END TEST fio_dif_rand_params 00:24:15.577 ************************************ 00:24:15.577 16:16:45 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:15.577 16:16:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:15.577 16:16:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:15.577 16:16:45 -- common/autotest_common.sh@10 -- # set +x 00:24:15.836 ************************************ 00:24:15.836 START TEST fio_dif_digest 00:24:15.836 ************************************ 00:24:15.836 16:16:45 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:24:15.836 16:16:45 -- target/dif.sh@123 -- # local NULL_DIF 00:24:15.836 16:16:45 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:15.836 16:16:45 -- target/dif.sh@125 -- # local hdgst ddgst 00:24:15.836 16:16:45 -- target/dif.sh@127 -- # NULL_DIF=3 00:24:15.836 16:16:45 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:15.836 16:16:45 -- target/dif.sh@127 -- # numjobs=3 00:24:15.836 16:16:45 -- target/dif.sh@127 -- # iodepth=3 00:24:15.836 16:16:45 -- target/dif.sh@127 -- # runtime=10 00:24:15.836 16:16:45 -- target/dif.sh@128 -- # hdgst=true 00:24:15.836 16:16:45 -- target/dif.sh@128 -- # ddgst=true 00:24:15.836 16:16:45 -- target/dif.sh@130 -- # create_subsystems 0 00:24:15.836 16:16:45 -- target/dif.sh@28 -- # local sub 00:24:15.836 16:16:45 -- target/dif.sh@30 -- # for sub in "$@" 00:24:15.836 16:16:45 -- target/dif.sh@31 -- # create_subsystem 0 00:24:15.836 16:16:45 -- target/dif.sh@18 -- # local sub_id=0 00:24:15.836 16:16:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:15.836 16:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.836 16:16:45 -- common/autotest_common.sh@10 -- # set +x 00:24:15.836 bdev_null0 00:24:15.836 16:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.836 16:16:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:15.836 16:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.836 16:16:45 -- 
common/autotest_common.sh@10 -- # set +x 00:24:15.837 16:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.837 16:16:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:15.837 16:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.837 16:16:45 -- common/autotest_common.sh@10 -- # set +x 00:24:15.837 16:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.837 16:16:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:15.837 16:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.837 16:16:45 -- common/autotest_common.sh@10 -- # set +x 00:24:15.837 [2024-04-15 16:16:45.619569] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.837 16:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.837 16:16:45 -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:15.837 16:16:45 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:15.837 16:16:45 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:15.837 16:16:45 -- nvmf/common.sh@521 -- # config=() 00:24:15.837 16:16:45 -- nvmf/common.sh@521 -- # local subsystem config 00:24:15.837 16:16:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:15.837 16:16:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:15.837 { 00:24:15.837 "params": { 00:24:15.837 "name": "Nvme$subsystem", 00:24:15.837 "trtype": "$TEST_TRANSPORT", 00:24:15.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.837 "adrfam": "ipv4", 00:24:15.837 "trsvcid": "$NVMF_PORT", 00:24:15.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.837 "hdgst": ${hdgst:-false}, 00:24:15.837 "ddgst": ${ddgst:-false} 00:24:15.837 }, 00:24:15.837 "method": "bdev_nvme_attach_controller" 00:24:15.837 } 00:24:15.837 EOF 00:24:15.837 )") 00:24:15.837 16:16:45 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.837 16:16:45 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.837 16:16:45 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:24:15.837 16:16:45 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:15.837 16:16:45 -- common/autotest_common.sh@1325 -- # local sanitizers 00:24:15.837 16:16:45 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.837 16:16:45 -- common/autotest_common.sh@1327 -- # shift 00:24:15.837 16:16:45 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:24:15.837 16:16:45 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:15.837 16:16:45 -- nvmf/common.sh@543 -- # cat 00:24:15.837 16:16:45 -- target/dif.sh@82 -- # gen_fio_conf 00:24:15.837 16:16:45 -- target/dif.sh@54 -- # local file 00:24:15.837 16:16:45 -- target/dif.sh@56 -- # cat 00:24:15.837 16:16:45 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.837 16:16:45 -- common/autotest_common.sh@1331 -- # grep libasan 00:24:15.837 16:16:45 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:15.837 16:16:45 -- nvmf/common.sh@545 -- # jq . 
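The digest run that starts in the trace below drives fio through the SPDK bdev plugin: the target side is the 64 MiB null bdev created above (512-byte blocks, 16 bytes of metadata, DIF type 3) exported over NVMe/TCP, and the initiator attaches it with header and data digests enabled. The per-job fio config is generated on the fly and passed on /dev/fd/61, so it is not captured by the trace; the job file below is only a hedged reconstruction from the parameters that are visible here (bs=128k, iodepth=3, numjobs=3, runtime=10, rw=randread, ioengine=spdk_bdev). The [filename0] section name matches the fio banner, and Nvme0n1 is an assumed name for the bdev exposed by bdev_nvme_attach_controller.

cat > /tmp/dif_digest_sketch.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
time_based=1
runtime=10
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
# assumed bdev name; the harness generates the real section on the fly
filename=Nvme0n1
EOF

Running such a job standalone would still need the plugin preloaded and a --spdk_json_conf describing the attached controller, exactly as the LD_PRELOAD/fio invocation in the trace does.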
00:24:15.837 16:16:45 -- target/dif.sh@72 -- # (( file = 1 )) 00:24:15.837 16:16:45 -- target/dif.sh@72 -- # (( file <= files )) 00:24:15.837 16:16:45 -- nvmf/common.sh@546 -- # IFS=, 00:24:15.837 16:16:45 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:15.837 "params": { 00:24:15.837 "name": "Nvme0", 00:24:15.837 "trtype": "tcp", 00:24:15.837 "traddr": "10.0.0.2", 00:24:15.837 "adrfam": "ipv4", 00:24:15.837 "trsvcid": "4420", 00:24:15.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:15.837 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:15.837 "hdgst": true, 00:24:15.837 "ddgst": true 00:24:15.837 }, 00:24:15.837 "method": "bdev_nvme_attach_controller" 00:24:15.837 }' 00:24:15.837 16:16:45 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:15.837 16:16:45 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:15.837 16:16:45 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:15.837 16:16:45 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.837 16:16:45 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:24:15.837 16:16:45 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:15.837 16:16:45 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:15.837 16:16:45 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:15.837 16:16:45 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:15.837 16:16:45 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:16.094 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:16.094 ... 00:24:16.094 fio-3.35 00:24:16.094 Starting 3 threads 00:24:16.352 [2024-04-15 16:16:46.182277] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:24:16.353 [2024-04-15 16:16:46.182355] rpc.c: 167:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:28.553 00:24:28.553 filename0: (groupid=0, jobs=1): err= 0: pid=94810: Mon Apr 15 16:16:56 2024 00:24:28.553 read: IOPS=241, BW=30.1MiB/s (31.6MB/s)(302MiB/10001msec) 00:24:28.553 slat (nsec): min=6903, max=81161, avg=15440.76, stdev=9626.19 00:24:28.553 clat (usec): min=11096, max=14119, avg=12397.42, stdev=439.91 00:24:28.553 lat (usec): min=11106, max=14149, avg=12412.86, stdev=440.40 00:24:28.553 clat percentiles (usec): 00:24:28.553 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11731], 20.00th=[11994], 00:24:28.553 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:24:28.553 | 70.00th=[12649], 80.00th=[12649], 90.00th=[12911], 95.00th=[13042], 00:24:28.553 | 99.00th=[13698], 99.50th=[13960], 99.90th=[14091], 99.95th=[14091], 00:24:28.553 | 99.99th=[14091] 00:24:28.553 bw ( KiB/s): min=29184, max=33024, per=33.30%, avg=30841.26, stdev=859.15, samples=19 00:24:28.553 iops : min= 228, max= 258, avg=240.95, stdev= 6.71, samples=19 00:24:28.553 lat (msec) : 20=100.00% 00:24:28.553 cpu : usr=92.82%, sys=6.26%, ctx=27, majf=0, minf=9 00:24:28.553 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.553 issued rwts: total=2412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.553 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:28.553 filename0: (groupid=0, jobs=1): err= 0: pid=94811: Mon Apr 15 16:16:56 2024 00:24:28.553 read: IOPS=241, BW=30.2MiB/s (31.6MB/s)(302MiB/10009msec) 00:24:28.553 slat (usec): min=6, max=189, avg=15.12, stdev=11.35 00:24:28.553 clat (usec): min=9144, max=14104, avg=12390.92, stdev=450.82 00:24:28.553 lat (usec): min=9152, max=14144, avg=12406.04, stdev=452.20 00:24:28.553 clat percentiles (usec): 00:24:28.553 | 1.00th=[11338], 5.00th=[11600], 10.00th=[11731], 20.00th=[11994], 00:24:28.553 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:24:28.553 | 70.00th=[12649], 80.00th=[12649], 90.00th=[12911], 95.00th=[13042], 00:24:28.553 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14091], 99.95th=[14091], 00:24:28.553 | 99.99th=[14091] 00:24:28.553 bw ( KiB/s): min=29184, max=32256, per=33.34%, avg=30881.68, stdev=871.11, samples=19 00:24:28.553 iops : min= 228, max= 252, avg=241.26, stdev= 6.81, samples=19 00:24:28.553 lat (msec) : 10=0.12%, 20=99.88% 00:24:28.553 cpu : usr=92.10%, sys=6.71%, ctx=211, majf=0, minf=0 00:24:28.553 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.553 issued rwts: total=2415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.553 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:28.553 filename0: (groupid=0, jobs=1): err= 0: pid=94812: Mon Apr 15 16:16:56 2024 00:24:28.553 read: IOPS=241, BW=30.2MiB/s (31.6MB/s)(302MiB/10007msec) 00:24:28.553 slat (nsec): min=6465, max=51359, avg=12134.80, stdev=5787.11 00:24:28.553 clat (usec): min=7579, max=14067, avg=12399.82, stdev=464.63 00:24:28.553 lat (usec): min=7587, max=14097, avg=12411.96, stdev=465.39 00:24:28.553 clat percentiles (usec): 00:24:28.553 | 1.00th=[11338], 5.00th=[11600], 10.00th=[11863], 
20.00th=[11994], 00:24:28.553 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:24:28.553 | 70.00th=[12649], 80.00th=[12780], 90.00th=[12911], 95.00th=[13042], 00:24:28.553 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14091], 99.95th=[14091], 00:24:28.553 | 99.99th=[14091] 00:24:28.553 bw ( KiB/s): min=29184, max=33024, per=33.30%, avg=30844.47, stdev=858.79, samples=19 00:24:28.553 iops : min= 228, max= 258, avg=240.95, stdev= 6.71, samples=19 00:24:28.553 lat (msec) : 10=0.12%, 20=99.88% 00:24:28.553 cpu : usr=92.92%, sys=6.27%, ctx=77, majf=0, minf=9 00:24:28.553 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.553 issued rwts: total=2415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.553 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:28.553 00:24:28.553 Run status group 0 (all jobs): 00:24:28.553 READ: bw=90.4MiB/s (94.8MB/s), 30.1MiB/s-30.2MiB/s (31.6MB/s-31.6MB/s), io=905MiB (949MB), run=10001-10009msec 00:24:28.553 16:16:56 -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:28.553 16:16:56 -- target/dif.sh@43 -- # local sub 00:24:28.553 16:16:56 -- target/dif.sh@45 -- # for sub in "$@" 00:24:28.553 16:16:56 -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:28.553 16:16:56 -- target/dif.sh@36 -- # local sub_id=0 00:24:28.553 16:16:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:28.553 16:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.553 16:16:56 -- common/autotest_common.sh@10 -- # set +x 00:24:28.553 16:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.553 16:16:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:28.553 16:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.553 16:16:56 -- common/autotest_common.sh@10 -- # set +x 00:24:28.553 16:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.553 00:24:28.553 real 0m10.928s 00:24:28.553 user 0m28.396s 00:24:28.553 sys 0m2.200s 00:24:28.553 16:16:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:28.553 ************************************ 00:24:28.553 END TEST fio_dif_digest 00:24:28.553 ************************************ 00:24:28.553 16:16:56 -- common/autotest_common.sh@10 -- # set +x 00:24:28.553 16:16:56 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:28.553 16:16:56 -- target/dif.sh@147 -- # nvmftestfini 00:24:28.553 16:16:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:28.553 16:16:56 -- nvmf/common.sh@117 -- # sync 00:24:28.553 16:16:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:28.553 16:16:56 -- nvmf/common.sh@120 -- # set +e 00:24:28.553 16:16:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:28.553 16:16:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:28.553 rmmod nvme_tcp 00:24:28.553 rmmod nvme_fabrics 00:24:28.553 16:16:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:28.553 16:16:56 -- nvmf/common.sh@124 -- # set -e 00:24:28.553 16:16:56 -- nvmf/common.sh@125 -- # return 0 00:24:28.553 16:16:56 -- nvmf/common.sh@478 -- # '[' -n 94048 ']' 00:24:28.553 16:16:56 -- nvmf/common.sh@479 -- # killprocess 94048 00:24:28.553 16:16:56 -- common/autotest_common.sh@936 -- # '[' -z 94048 ']' 00:24:28.553 16:16:56 -- common/autotest_common.sh@940 -- # kill -0 94048 00:24:28.553 16:16:56 -- 
common/autotest_common.sh@941 -- # uname 00:24:28.553 16:16:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:28.553 16:16:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94048 00:24:28.553 16:16:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:28.553 16:16:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:28.553 16:16:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94048' 00:24:28.553 killing process with pid 94048 00:24:28.553 16:16:56 -- common/autotest_common.sh@955 -- # kill 94048 00:24:28.553 16:16:56 -- common/autotest_common.sh@960 -- # wait 94048 00:24:28.553 16:16:56 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:24:28.553 16:16:56 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:28.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:28.553 Waiting for block devices as requested 00:24:28.553 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:28.553 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:28.553 16:16:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:28.553 16:16:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:28.553 16:16:57 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.553 16:16:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:28.553 16:16:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.553 16:16:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:28.553 16:16:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.553 16:16:57 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:28.553 ************************************ 00:24:28.553 END TEST nvmf_dif 00:24:28.553 ************************************ 00:24:28.553 00:24:28.553 real 0m59.923s 00:24:28.553 user 3m45.264s 00:24:28.553 sys 0m22.547s 00:24:28.553 16:16:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:28.553 16:16:57 -- common/autotest_common.sh@10 -- # set +x 00:24:28.553 16:16:57 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:28.553 16:16:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:28.553 16:16:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:28.553 16:16:57 -- common/autotest_common.sh@10 -- # set +x 00:24:28.553 ************************************ 00:24:28.553 START TEST nvmf_abort_qd_sizes 00:24:28.553 ************************************ 00:24:28.553 16:16:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:28.553 * Looking for test storage... 
00:24:28.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:28.553 16:16:57 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:28.553 16:16:57 -- nvmf/common.sh@7 -- # uname -s 00:24:28.553 16:16:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.553 16:16:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.553 16:16:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.553 16:16:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.553 16:16:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.553 16:16:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.553 16:16:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.553 16:16:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.553 16:16:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.553 16:16:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.553 16:16:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:24:28.553 16:16:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:24:28.553 16:16:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.553 16:16:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.553 16:16:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:28.554 16:16:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.554 16:16:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:28.554 16:16:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.554 16:16:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.554 16:16:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.554 16:16:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.554 16:16:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.554 16:16:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.554 16:16:57 -- paths/export.sh@5 -- # export PATH 00:24:28.554 16:16:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.554 16:16:57 -- nvmf/common.sh@47 -- # : 0 00:24:28.554 16:16:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.554 16:16:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.554 16:16:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.554 16:16:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.554 16:16:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.554 16:16:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.554 16:16:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.554 16:16:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.554 16:16:57 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:28.554 16:16:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:28.554 16:16:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.554 16:16:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:28.554 16:16:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:28.554 16:16:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:28.554 16:16:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.554 16:16:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:28.554 16:16:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.554 16:16:57 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:28.554 16:16:57 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:28.554 16:16:57 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:28.554 16:16:57 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:28.554 16:16:57 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:28.554 16:16:57 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:28.554 16:16:57 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.554 16:16:57 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.554 16:16:57 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:28.554 16:16:57 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:28.554 16:16:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:28.554 16:16:57 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:28.554 16:16:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:28.554 16:16:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.554 16:16:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:28.554 16:16:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:28.554 16:16:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:28.554 16:16:57 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:28.554 16:16:57 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:28.554 16:16:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:28.554 Cannot find device "nvmf_tgt_br" 00:24:28.554 16:16:57 -- nvmf/common.sh@155 -- # true 00:24:28.554 16:16:57 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.554 Cannot find device "nvmf_tgt_br2" 00:24:28.554 16:16:57 -- nvmf/common.sh@156 -- # true 
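The nvmf_veth_init sequence running here first tears down any leftover interfaces and namespace from a previous run (hence the "Cannot find device" and "Cannot open network namespace" messages around this point) and then, in the commands that follow, rebuilds a small virtual topology: the initiator keeps 10.0.0.1/24 on nvmf_init_if, the target addresses 10.0.0.2/24 and 10.0.0.3/24 live on veth far ends inside the nvmf_tgt_ns_spdk namespace, everything is joined through the nvmf_br bridge, and an iptables rule admits NVMe/TCP traffic on port 4420. Consolidated as a sketch, using exactly the names and addresses the trace uses (a root shell is assumed):

# sketch consolidated from the traced nvmf_veth_init; assumes a root shell
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # sanity check from the host into the namespace, as the trace does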
00:24:28.554 16:16:57 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:28.554 16:16:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:28.554 Cannot find device "nvmf_tgt_br" 00:24:28.554 16:16:58 -- nvmf/common.sh@158 -- # true 00:24:28.554 16:16:58 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:28.554 Cannot find device "nvmf_tgt_br2" 00:24:28.554 16:16:58 -- nvmf/common.sh@159 -- # true 00:24:28.554 16:16:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:28.554 16:16:58 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:28.554 16:16:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.554 16:16:58 -- nvmf/common.sh@162 -- # true 00:24:28.554 16:16:58 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.554 16:16:58 -- nvmf/common.sh@163 -- # true 00:24:28.554 16:16:58 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:28.554 16:16:58 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:28.554 16:16:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:28.554 16:16:58 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:28.554 16:16:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:28.554 16:16:58 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:28.554 16:16:58 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:28.554 16:16:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:28.554 16:16:58 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:28.554 16:16:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:28.554 16:16:58 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:28.554 16:16:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:28.554 16:16:58 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:28.554 16:16:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:28.554 16:16:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:28.554 16:16:58 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:28.554 16:16:58 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:28.554 16:16:58 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:28.554 16:16:58 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:28.554 16:16:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:28.554 16:16:58 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:28.554 16:16:58 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:28.554 16:16:58 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:28.554 16:16:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:28.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:28.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:24:28.554 00:24:28.554 --- 10.0.0.2 ping statistics --- 00:24:28.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.554 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:24:28.554 16:16:58 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:28.554 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:28.554 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:24:28.554 00:24:28.554 --- 10.0.0.3 ping statistics --- 00:24:28.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.554 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:28.554 16:16:58 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:28.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:24:28.554 00:24:28.554 --- 10.0.0.1 ping statistics --- 00:24:28.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.554 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:24:28.554 16:16:58 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.554 16:16:58 -- nvmf/common.sh@422 -- # return 0 00:24:28.554 16:16:58 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:24:28.554 16:16:58 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:29.119 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:29.377 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:29.377 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:29.377 16:16:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.377 16:16:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:29.377 16:16:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:29.377 16:16:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.377 16:16:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:29.377 16:16:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:29.377 16:16:59 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:29.377 16:16:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:29.377 16:16:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:29.377 16:16:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.377 16:16:59 -- nvmf/common.sh@470 -- # nvmfpid=95418 00:24:29.377 16:16:59 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:29.377 16:16:59 -- nvmf/common.sh@471 -- # waitforlisten 95418 00:24:29.377 16:16:59 -- common/autotest_common.sh@817 -- # '[' -z 95418 ']' 00:24:29.377 16:16:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.377 16:16:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:29.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.377 16:16:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.377 16:16:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:29.377 16:16:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.377 [2024-04-15 16:16:59.294550] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:24:29.377 [2024-04-15 16:16:59.294637] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.636 [2024-04-15 16:16:59.440649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:29.636 [2024-04-15 16:16:59.499953] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.636 [2024-04-15 16:16:59.500012] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.636 [2024-04-15 16:16:59.500029] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.636 [2024-04-15 16:16:59.500042] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.636 [2024-04-15 16:16:59.500054] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:29.636 [2024-04-15 16:16:59.501173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.636 [2024-04-15 16:16:59.501315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.636 [2024-04-15 16:16:59.502002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.636 [2024-04-15 16:16:59.502549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:30.569 16:17:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:30.569 16:17:00 -- common/autotest_common.sh@850 -- # return 0 00:24:30.569 16:17:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:30.569 16:17:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:30.569 16:17:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.569 16:17:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.569 16:17:00 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:30.569 16:17:00 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:30.569 16:17:00 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:30.569 16:17:00 -- scripts/common.sh@309 -- # local bdf bdfs 00:24:30.569 16:17:00 -- scripts/common.sh@310 -- # local nvmes 00:24:30.569 16:17:00 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:24:30.570 16:17:00 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:30.570 16:17:00 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:24:30.570 16:17:00 -- scripts/common.sh@295 -- # local bdf= 00:24:30.570 16:17:00 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:24:30.570 16:17:00 -- scripts/common.sh@230 -- # local class 00:24:30.570 16:17:00 -- scripts/common.sh@231 -- # local subclass 00:24:30.570 16:17:00 -- scripts/common.sh@232 -- # local progif 00:24:30.570 16:17:00 -- scripts/common.sh@233 -- # printf %02x 1 00:24:30.570 16:17:00 -- scripts/common.sh@233 -- # class=01 00:24:30.570 16:17:00 -- scripts/common.sh@234 -- # printf %02x 8 00:24:30.570 16:17:00 -- scripts/common.sh@234 -- # subclass=08 00:24:30.570 16:17:00 -- scripts/common.sh@235 -- # printf %02x 2 00:24:30.570 16:17:00 -- scripts/common.sh@235 -- # progif=02 00:24:30.570 16:17:00 -- scripts/common.sh@237 -- # hash lspci 00:24:30.570 16:17:00 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:24:30.570 16:17:00 -- scripts/common.sh@239 -- 
# lspci -mm -n -D 00:24:30.570 16:17:00 -- scripts/common.sh@240 -- # grep -i -- -p02 00:24:30.570 16:17:00 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:30.570 16:17:00 -- scripts/common.sh@242 -- # tr -d '"' 00:24:30.570 16:17:00 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:30.570 16:17:00 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:24:30.570 16:17:00 -- scripts/common.sh@15 -- # local i 00:24:30.570 16:17:00 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:24:30.570 16:17:00 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:30.570 16:17:00 -- scripts/common.sh@24 -- # return 0 00:24:30.570 16:17:00 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:24:30.570 16:17:00 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:30.570 16:17:00 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:24:30.570 16:17:00 -- scripts/common.sh@15 -- # local i 00:24:30.570 16:17:00 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:24:30.570 16:17:00 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:30.570 16:17:00 -- scripts/common.sh@24 -- # return 0 00:24:30.570 16:17:00 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:24:30.570 16:17:00 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:30.570 16:17:00 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:30.570 16:17:00 -- scripts/common.sh@320 -- # uname -s 00:24:30.570 16:17:00 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:30.570 16:17:00 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:30.570 16:17:00 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:30.570 16:17:00 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:30.570 16:17:00 -- scripts/common.sh@320 -- # uname -s 00:24:30.570 16:17:00 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:30.570 16:17:00 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:30.570 16:17:00 -- scripts/common.sh@325 -- # (( 2 )) 00:24:30.570 16:17:00 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:30.570 16:17:00 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:24:30.570 16:17:00 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:30.570 16:17:00 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:30.570 16:17:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:30.570 16:17:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:30.570 16:17:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.570 ************************************ 00:24:30.570 START TEST spdk_target_abort 00:24:30.570 ************************************ 00:24:30.570 16:17:00 -- common/autotest_common.sh@1111 -- # spdk_target 00:24:30.570 16:17:00 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:30.570 16:17:00 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:30.570 16:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.570 16:17:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.867 spdk_targetn1 00:24:30.867 16:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:30.867 16:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.867 16:17:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.867 
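Just below, the TCP transport comes up, the spdk_targetn1 null namespace is exported under nqn.2016-06.io.spdk:testnqn on 10.0.0.2:4420, and the rabort helper then assembles an SPDK transport-ID string field by field before running the abort example once per queue depth in qds=(4 24 64). Restated as a minimal loop, using only what the trace itself shows (path, flags and values are copied verbatim rather than interpreted):

# sketch of the traced rabort loop; flags and values are taken from the trace
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done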
[2024-04-15 16:17:00.545951] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.867 16:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:30.867 16:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.867 16:17:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.867 16:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:30.867 16:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.867 16:17:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.867 16:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:24:30.867 16:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.867 16:17:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.867 [2024-04-15 16:17:00.594310] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.867 16:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.867 16:17:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:30.868 16:17:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.868 16:17:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:24:30.868 16:17:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.868 16:17:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:30.868 16:17:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.868 16:17:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:30.868 16:17:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:30.868 16:17:00 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:34.147 Initializing NVMe Controllers 00:24:34.147 Attached to NVMe over 
Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:34.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:34.147 Initialization complete. Launching workers. 00:24:34.147 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12000, failed: 0 00:24:34.147 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 812, failed to submit 11188 00:24:34.147 success 592, unsuccess 220, failed 0 00:24:34.147 16:17:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:34.147 16:17:03 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:37.468 Initializing NVMe Controllers 00:24:37.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:37.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:37.468 Initialization complete. Launching workers. 00:24:37.468 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 2808, failed: 0 00:24:37.468 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 478, failed to submit 2330 00:24:37.468 success 194, unsuccess 284, failed 0 00:24:37.468 16:17:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:37.468 16:17:07 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.752 Initializing NVMe Controllers 00:24:40.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:40.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:40.752 Initialization complete. Launching workers. 
00:24:40.752 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 25698, failed: 0 00:24:40.752 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1049, failed to submit 24649 00:24:40.752 success 97, unsuccess 952, failed 0 00:24:40.752 16:17:10 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:40.752 16:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.752 16:17:10 -- common/autotest_common.sh@10 -- # set +x 00:24:40.752 16:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.752 16:17:10 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:40.752 16:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.752 16:17:10 -- common/autotest_common.sh@10 -- # set +x 00:24:41.011 16:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.011 16:17:10 -- target/abort_qd_sizes.sh@61 -- # killprocess 95418 00:24:41.011 16:17:10 -- common/autotest_common.sh@936 -- # '[' -z 95418 ']' 00:24:41.011 16:17:10 -- common/autotest_common.sh@940 -- # kill -0 95418 00:24:41.011 16:17:10 -- common/autotest_common.sh@941 -- # uname 00:24:41.011 16:17:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:41.011 16:17:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95418 00:24:41.011 16:17:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:41.011 16:17:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:41.011 16:17:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95418' 00:24:41.011 killing process with pid 95418 00:24:41.011 16:17:10 -- common/autotest_common.sh@955 -- # kill 95418 00:24:41.011 16:17:10 -- common/autotest_common.sh@960 -- # wait 95418 00:24:41.269 00:24:41.269 real 0m10.596s 00:24:41.269 user 0m41.702s 00:24:41.269 sys 0m3.059s 00:24:41.269 16:17:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:41.269 16:17:11 -- common/autotest_common.sh@10 -- # set +x 00:24:41.269 ************************************ 00:24:41.269 END TEST spdk_target_abort 00:24:41.269 ************************************ 00:24:41.269 16:17:11 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:41.269 16:17:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:41.269 16:17:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:41.269 16:17:11 -- common/autotest_common.sh@10 -- # set +x 00:24:41.269 ************************************ 00:24:41.269 START TEST kernel_target_abort 00:24:41.269 ************************************ 00:24:41.269 16:17:11 -- common/autotest_common.sh@1111 -- # kernel_target 00:24:41.269 16:17:11 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:41.269 16:17:11 -- nvmf/common.sh@717 -- # local ip 00:24:41.269 16:17:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:41.269 16:17:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:41.269 16:17:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.269 16:17:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.269 16:17:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:41.269 16:17:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.269 16:17:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:41.269 16:17:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:41.269 16:17:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
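For the kernel-target variant, configure_kernel_target (traced below) builds the target by hand through the nvmet configfs tree rather than through SPDK RPCs. xtrace records the echo arguments but not their redirect destinations, so the attribute file names in this sketch are an assumption based on the usual nvmet configfs layout; the NQN, namespace device, address, port and the final symlink are the values the trace uses, and the "SPDK-nqn..." serial string it also writes is omitted because its destination is not visible.

# hedged sketch; configfs attribute names are assumed, values come from the trace
sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet
mkdir "$sub"
mkdir "$sub/namespaces/1"
mkdir "$port"
echo 1            > "$sub/attr_allow_any_host"
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"

The discovery output further down (Discovery Log Entry 1, subnqn nqn.2016-06.io.spdk:testnqn at 10.0.0.1:4420) is what confirms the port and subsystem actually linked up.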
00:24:41.269 16:17:11 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:41.269 16:17:11 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:41.269 16:17:11 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:41.269 16:17:11 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:41.269 16:17:11 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:41.269 16:17:11 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:41.269 16:17:11 -- nvmf/common.sh@628 -- # local block nvme 00:24:41.269 16:17:11 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:24:41.269 16:17:11 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:41.269 16:17:11 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:41.269 16:17:11 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:41.836 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:41.836 Waiting for block devices as requested 00:24:41.836 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:41.836 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:42.094 16:17:11 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:42.094 16:17:11 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:42.094 16:17:11 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:42.094 16:17:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:42.094 16:17:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:42.094 16:17:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:42.094 16:17:11 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:42.094 16:17:11 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:42.094 16:17:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:42.094 No valid GPT data, bailing 00:24:42.094 16:17:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:42.094 16:17:11 -- scripts/common.sh@391 -- # pt= 00:24:42.094 16:17:11 -- scripts/common.sh@392 -- # return 1 00:24:42.094 16:17:11 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:42.094 16:17:11 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:42.094 16:17:11 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:42.094 16:17:11 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:24:42.094 16:17:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:24:42.094 16:17:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:42.094 16:17:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:42.094 16:17:11 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:24:42.094 16:17:11 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:42.094 16:17:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:42.094 No valid GPT data, bailing 00:24:42.094 16:17:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:24:42.094 16:17:12 -- scripts/common.sh@391 -- # pt= 00:24:42.094 16:17:12 -- scripts/common.sh@392 -- # return 1 00:24:42.094 16:17:12 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:24:42.094 16:17:12 -- nvmf/common.sh@639 -- # for 
block in /sys/block/nvme* 00:24:42.094 16:17:12 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:42.094 16:17:12 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:24:42.094 16:17:12 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:24:42.094 16:17:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:42.094 16:17:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:42.094 16:17:12 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:24:42.094 16:17:12 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:42.094 16:17:12 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:42.353 No valid GPT data, bailing 00:24:42.353 16:17:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:42.353 16:17:12 -- scripts/common.sh@391 -- # pt= 00:24:42.353 16:17:12 -- scripts/common.sh@392 -- # return 1 00:24:42.353 16:17:12 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:24:42.353 16:17:12 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:42.353 16:17:12 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:42.353 16:17:12 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:24:42.353 16:17:12 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:24:42.353 16:17:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:42.353 16:17:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:42.353 16:17:12 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:24:42.353 16:17:12 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:42.353 16:17:12 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:42.353 No valid GPT data, bailing 00:24:42.353 16:17:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:42.353 16:17:12 -- scripts/common.sh@391 -- # pt= 00:24:42.353 16:17:12 -- scripts/common.sh@392 -- # return 1 00:24:42.353 16:17:12 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:24:42.353 16:17:12 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:24:42.353 16:17:12 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:42.353 16:17:12 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:42.353 16:17:12 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:42.353 16:17:12 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:42.353 16:17:12 -- nvmf/common.sh@656 -- # echo 1 00:24:42.353 16:17:12 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:24:42.353 16:17:12 -- nvmf/common.sh@658 -- # echo 1 00:24:42.353 16:17:12 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:42.353 16:17:12 -- nvmf/common.sh@661 -- # echo tcp 00:24:42.353 16:17:12 -- nvmf/common.sh@662 -- # echo 4420 00:24:42.353 16:17:12 -- nvmf/common.sh@663 -- # echo ipv4 00:24:42.353 16:17:12 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:42.353 16:17:12 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a --hostid=5cf30adc-e0ee-4255-9853-178d7d30983a -a 10.0.0.1 -t tcp -s 4420 00:24:42.353 00:24:42.353 Discovery Log Number of Records 2, Generation counter 2 00:24:42.354 =====Discovery Log Entry 0====== 00:24:42.354 trtype: tcp 00:24:42.354 adrfam: ipv4 00:24:42.354 
subtype: current discovery subsystem 00:24:42.354 treq: not specified, sq flow control disable supported 00:24:42.354 portid: 1 00:24:42.354 trsvcid: 4420 00:24:42.354 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:42.354 traddr: 10.0.0.1 00:24:42.354 eflags: none 00:24:42.354 sectype: none 00:24:42.354 =====Discovery Log Entry 1====== 00:24:42.354 trtype: tcp 00:24:42.354 adrfam: ipv4 00:24:42.354 subtype: nvme subsystem 00:24:42.354 treq: not specified, sq flow control disable supported 00:24:42.354 portid: 1 00:24:42.354 trsvcid: 4420 00:24:42.354 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:42.354 traddr: 10.0.0.1 00:24:42.354 eflags: none 00:24:42.354 sectype: none 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:42.354 16:17:12 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:45.644 Initializing NVMe Controllers 00:24:45.644 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:45.644 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:45.644 Initialization complete. Launching workers. 
00:24:45.644 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39424, failed: 0 00:24:45.644 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39424, failed to submit 0 00:24:45.644 success 0, unsuccess 39424, failed 0 00:24:45.644 16:17:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:45.644 16:17:15 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:48.933 Initializing NVMe Controllers 00:24:48.933 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:48.933 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:48.933 Initialization complete. Launching workers. 00:24:48.933 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71945, failed: 0 00:24:48.933 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31299, failed to submit 40646 00:24:48.933 success 0, unsuccess 31299, failed 0 00:24:48.933 16:17:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:48.933 16:17:18 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:52.217 Initializing NVMe Controllers 00:24:52.217 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:52.217 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:52.217 Initialization complete. Launching workers. 00:24:52.217 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 86775, failed: 0 00:24:52.217 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21678, failed to submit 65097 00:24:52.217 success 0, unsuccess 21678, failed 0 00:24:52.217 16:17:21 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:52.217 16:17:21 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:52.217 16:17:21 -- nvmf/common.sh@675 -- # echo 0 00:24:52.217 16:17:21 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:52.217 16:17:21 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:52.217 16:17:21 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:52.217 16:17:21 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:52.217 16:17:21 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:52.217 16:17:21 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:52.217 16:17:21 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:52.782 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:54.163 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:54.163 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:54.163 00:24:54.163 real 0m12.728s 00:24:54.163 user 0m6.467s 00:24:54.163 sys 0m3.702s 00:24:54.163 16:17:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:54.163 ************************************ 00:24:54.163 END TEST kernel_target_abort 
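The three result blocks above come from abort_qd_sizes.sh's rabort helper: the same abort example binary is run once per queue depth against the kernel target, then clean_kernel_target tears the configfs tree back down (the rm/rmdir/modprobe lines in the trace). Condensed, and reusing $subsys, $port and $subnqn from the previous sketch:

# Sketch only: one abort run per queue depth, with the argument set seen in the trace.
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
  /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done

# Teardown as traced from clean_kernel_target ('echo 0' target assumed to be the namespace enable flag):
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/$subnqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet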
00:24:54.163 16:17:23 -- common/autotest_common.sh@10 -- # set +x 00:24:54.163 ************************************ 00:24:54.163 16:17:23 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:54.163 16:17:23 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:54.163 16:17:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:54.163 16:17:23 -- nvmf/common.sh@117 -- # sync 00:24:54.421 16:17:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:54.421 16:17:24 -- nvmf/common.sh@120 -- # set +e 00:24:54.421 16:17:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:54.421 16:17:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:54.421 rmmod nvme_tcp 00:24:54.421 rmmod nvme_fabrics 00:24:54.421 16:17:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:54.421 16:17:24 -- nvmf/common.sh@124 -- # set -e 00:24:54.421 16:17:24 -- nvmf/common.sh@125 -- # return 0 00:24:54.421 16:17:24 -- nvmf/common.sh@478 -- # '[' -n 95418 ']' 00:24:54.421 16:17:24 -- nvmf/common.sh@479 -- # killprocess 95418 00:24:54.421 16:17:24 -- common/autotest_common.sh@936 -- # '[' -z 95418 ']' 00:24:54.421 16:17:24 -- common/autotest_common.sh@940 -- # kill -0 95418 00:24:54.421 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (95418) - No such process 00:24:54.421 16:17:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 95418 is not found' 00:24:54.421 Process with pid 95418 is not found 00:24:54.421 16:17:24 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:24:54.421 16:17:24 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:54.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:54.679 Waiting for block devices as requested 00:24:54.937 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:54.937 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:54.937 16:17:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:54.937 16:17:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:54.937 16:17:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:54.937 16:17:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:54.937 16:17:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.937 16:17:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:54.937 16:17:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.195 16:17:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:55.195 00:24:55.195 real 0m27.139s 00:24:55.195 user 0m49.399s 00:24:55.195 sys 0m8.416s 00:24:55.195 16:17:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:55.195 16:17:24 -- common/autotest_common.sh@10 -- # set +x 00:24:55.195 ************************************ 00:24:55.195 END TEST nvmf_abort_qd_sizes 00:24:55.195 ************************************ 00:24:55.195 16:17:24 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:55.195 16:17:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:55.195 16:17:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:55.195 16:17:24 -- common/autotest_common.sh@10 -- # set +x 00:24:55.195 ************************************ 00:24:55.195 START TEST keyring_file 00:24:55.195 ************************************ 00:24:55.195 16:17:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:55.195 * Looking for 
test storage... 00:24:55.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:55.195 16:17:25 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:55.195 16:17:25 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:55.195 16:17:25 -- nvmf/common.sh@7 -- # uname -s 00:24:55.195 16:17:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.195 16:17:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.195 16:17:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.195 16:17:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.195 16:17:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.195 16:17:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.195 16:17:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.195 16:17:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.195 16:17:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.195 16:17:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.196 16:17:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5cf30adc-e0ee-4255-9853-178d7d30983a 00:24:55.196 16:17:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5cf30adc-e0ee-4255-9853-178d7d30983a 00:24:55.196 16:17:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.196 16:17:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.196 16:17:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:55.196 16:17:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.196 16:17:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:55.196 16:17:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.196 16:17:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.196 16:17:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.196 16:17:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.196 16:17:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.196 16:17:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.196 16:17:25 -- paths/export.sh@5 -- # export PATH 00:24:55.196 16:17:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.196 16:17:25 -- nvmf/common.sh@47 -- # : 0 00:24:55.196 16:17:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:55.196 16:17:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:55.196 16:17:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.196 16:17:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.196 16:17:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.196 16:17:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:55.196 16:17:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:55.196 16:17:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:55.196 16:17:25 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:55.196 16:17:25 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:55.196 16:17:25 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:55.196 16:17:25 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:55.196 16:17:25 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:55.196 16:17:25 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:55.196 16:17:25 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:55.196 16:17:25 -- keyring/common.sh@15 -- # local name key digest path 00:24:55.196 16:17:25 -- keyring/common.sh@17 -- # name=key0 00:24:55.196 16:17:25 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:55.196 16:17:25 -- keyring/common.sh@17 -- # digest=0 00:24:55.196 16:17:25 -- keyring/common.sh@18 -- # mktemp 00:24:55.196 16:17:25 -- keyring/common.sh@18 -- # path=/tmp/tmp.cC5f7D9GgN 00:24:55.196 16:17:25 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:55.196 16:17:25 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:55.196 16:17:25 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:55.196 16:17:25 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:55.196 16:17:25 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:24:55.196 16:17:25 -- nvmf/common.sh@693 -- # digest=0 00:24:55.196 16:17:25 -- nvmf/common.sh@694 -- # python - 00:24:55.454 16:17:25 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cC5f7D9GgN 00:24:55.454 16:17:25 -- keyring/common.sh@23 -- # echo /tmp/tmp.cC5f7D9GgN 00:24:55.454 16:17:25 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.cC5f7D9GgN 00:24:55.454 16:17:25 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:55.454 16:17:25 -- keyring/common.sh@15 -- # local name key digest path 00:24:55.454 16:17:25 -- keyring/common.sh@17 -- # name=key1 00:24:55.454 16:17:25 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:55.454 16:17:25 -- keyring/common.sh@17 -- # digest=0 00:24:55.454 16:17:25 -- keyring/common.sh@18 -- # mktemp 00:24:55.454 16:17:25 -- keyring/common.sh@18 -- # path=/tmp/tmp.QNHRBP9o0c 00:24:55.454 16:17:25 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:55.454 16:17:25 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:24:55.454 16:17:25 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:55.454 16:17:25 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:55.454 16:17:25 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:24:55.454 16:17:25 -- nvmf/common.sh@693 -- # digest=0 00:24:55.454 16:17:25 -- nvmf/common.sh@694 -- # python - 00:24:55.454 16:17:25 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QNHRBP9o0c 00:24:55.454 16:17:25 -- keyring/common.sh@23 -- # echo /tmp/tmp.QNHRBP9o0c 00:24:55.454 16:17:25 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.QNHRBP9o0c 00:24:55.454 16:17:25 -- keyring/file.sh@30 -- # tgtpid=96311 00:24:55.454 16:17:25 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:55.454 16:17:25 -- keyring/file.sh@32 -- # waitforlisten 96311 00:24:55.454 16:17:25 -- common/autotest_common.sh@817 -- # '[' -z 96311 ']' 00:24:55.454 16:17:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.454 16:17:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:55.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.454 16:17:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.454 16:17:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:55.454 16:17:25 -- common/autotest_common.sh@10 -- # set +x 00:24:55.454 [2024-04-15 16:17:25.355254] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:24:55.454 [2024-04-15 16:17:25.355351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96311 ] 00:24:55.712 [2024-04-15 16:17:25.501263] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.712 [2024-04-15 16:17:25.554664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.647 16:17:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:56.647 16:17:26 -- common/autotest_common.sh@850 -- # return 0 00:24:56.647 16:17:26 -- keyring/file.sh@33 -- # rpc_cmd 00:24:56.647 16:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.647 16:17:26 -- common/autotest_common.sh@10 -- # set +x 00:24:56.647 [2024-04-15 16:17:26.391044] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.647 null0 00:24:56.647 [2024-04-15 16:17:26.423019] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:56.647 [2024-04-15 16:17:26.423247] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:56.647 [2024-04-15 16:17:26.431057] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:56.647 16:17:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.647 16:17:26 -- keyring/file.sh@43 -- # bperfpid=96330 00:24:56.647 16:17:26 -- keyring/file.sh@45 -- # waitforlisten 96330 /var/tmp/bperf.sock 00:24:56.647 16:17:26 -- keyring/file.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:56.647 16:17:26 -- common/autotest_common.sh@817 -- # '[' -z 96330 ']' 00:24:56.647 16:17:26 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:24:56.647 16:17:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:56.647 16:17:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:56.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:56.647 16:17:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:56.647 16:17:26 -- common/autotest_common.sh@10 -- # set +x 00:24:56.647 [2024-04-15 16:17:26.488960] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 00:24:56.647 [2024-04-15 16:17:26.489065] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96330 ] 00:24:56.973 [2024-04-15 16:17:26.634890] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.973 [2024-04-15 16:17:26.689423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.973 16:17:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:56.973 16:17:26 -- common/autotest_common.sh@850 -- # return 0 00:24:56.973 16:17:26 -- keyring/file.sh@46 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cC5f7D9GgN 00:24:56.973 16:17:26 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cC5f7D9GgN 00:24:57.244 16:17:27 -- keyring/file.sh@47 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QNHRBP9o0c 00:24:57.244 16:17:27 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QNHRBP9o0c 00:24:57.503 16:17:27 -- keyring/file.sh@48 -- # get_key key0 00:24:57.503 16:17:27 -- keyring/file.sh@48 -- # jq -r .path 00:24:57.503 16:17:27 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:57.503 16:17:27 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:57.503 16:17:27 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.075 16:17:27 -- keyring/file.sh@48 -- # [[ /tmp/tmp.cC5f7D9GgN == \/\t\m\p\/\t\m\p\.\c\C\5\f\7\D\9\G\g\N ]] 00:24:58.075 16:17:27 -- keyring/file.sh@49 -- # jq -r .path 00:24:58.075 16:17:27 -- keyring/file.sh@49 -- # get_key key1 00:24:58.075 16:17:27 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.075 16:17:27 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:58.075 16:17:27 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.333 16:17:28 -- keyring/file.sh@49 -- # [[ /tmp/tmp.QNHRBP9o0c == \/\t\m\p\/\t\m\p\.\Q\N\H\R\B\P\9\o\0\c ]] 00:24:58.333 16:17:28 -- keyring/file.sh@50 -- # get_refcnt key0 00:24:58.333 16:17:28 -- keyring/common.sh@12 -- # get_key key0 00:24:58.333 16:17:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:58.333 16:17:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.334 16:17:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.334 16:17:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:58.591 16:17:28 -- keyring/file.sh@50 -- # (( 1 == 1 )) 00:24:58.591 16:17:28 -- keyring/file.sh@51 -- # get_refcnt key1 00:24:58.591 16:17:28 -- keyring/common.sh@12 -- # get_key 
key1 00:24:58.591 16:17:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:58.591 16:17:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.591 16:17:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.591 16:17:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:58.849 16:17:28 -- keyring/file.sh@51 -- # (( 1 == 1 )) 00:24:58.849 16:17:28 -- keyring/file.sh@54 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:58.849 16:17:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:59.108 [2024-04-15 16:17:29.018161] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:59.367 nvme0n1 00:24:59.367 16:17:29 -- keyring/file.sh@56 -- # get_refcnt key0 00:24:59.367 16:17:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:59.367 16:17:29 -- keyring/common.sh@12 -- # get_key key0 00:24:59.367 16:17:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:59.367 16:17:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:59.367 16:17:29 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.625 16:17:29 -- keyring/file.sh@56 -- # (( 2 == 2 )) 00:24:59.625 16:17:29 -- keyring/file.sh@57 -- # get_refcnt key1 00:24:59.625 16:17:29 -- keyring/common.sh@12 -- # get_key key1 00:24:59.625 16:17:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:59.625 16:17:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:59.625 16:17:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:59.625 16:17:29 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.883 16:17:29 -- keyring/file.sh@57 -- # (( 1 == 1 )) 00:24:59.883 16:17:29 -- keyring/file.sh@59 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:59.883 Running I/O for 1 seconds... 
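From keyring/file.sh@46 onward every step in this test is an RPC against the bdevperf instance listening on /var/tmp/bperf.sock. Stripped of the get_key/jq plumbing, the happy-path sequence whose 1-second run is reported just below boils down to:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Register both PSK files with the keyring module, then refer to them by name.
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.cC5f7D9GgN
"$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.QNHRBP9o0c

# Attach a TLS NVMe/TCP controller using key0; its refcnt rises from 1 to 2.
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# Inspect the keys the same way get_refcnt does in keyring/common.sh.
"$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0")'

# Trigger the timed randrw workload whose latency table follows.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests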
00:25:01.257 00:25:01.257 Latency(us) 00:25:01.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.257 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:01.257 nvme0n1 : 1.00 13488.39 52.69 0.00 0.00 9464.22 4837.18 19598.38 00:25:01.257 =================================================================================================================== 00:25:01.257 Total : 13488.39 52.69 0.00 0.00 9464.22 4837.18 19598.38 00:25:01.257 0 00:25:01.257 16:17:30 -- keyring/file.sh@61 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:01.257 16:17:30 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:01.257 16:17:31 -- keyring/file.sh@62 -- # get_refcnt key0 00:25:01.257 16:17:31 -- keyring/common.sh@12 -- # get_key key0 00:25:01.257 16:17:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:01.257 16:17:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:01.257 16:17:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:01.257 16:17:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:01.515 16:17:31 -- keyring/file.sh@62 -- # (( 1 == 1 )) 00:25:01.515 16:17:31 -- keyring/file.sh@63 -- # get_refcnt key1 00:25:01.515 16:17:31 -- keyring/common.sh@12 -- # get_key key1 00:25:01.515 16:17:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:01.515 16:17:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:01.515 16:17:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:01.515 16:17:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:01.774 16:17:31 -- keyring/file.sh@63 -- # (( 1 == 1 )) 00:25:01.774 16:17:31 -- keyring/file.sh@66 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:01.774 16:17:31 -- common/autotest_common.sh@638 -- # local es=0 00:25:01.774 16:17:31 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:01.774 16:17:31 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:25:01.774 16:17:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:01.774 16:17:31 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:25:01.774 16:17:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:01.774 16:17:31 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:01.774 16:17:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:02.031 [2024-04-15 16:17:31.998024] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:02.031 [2024-04-15 16:17:31.998687] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179c370 (107): Transport endpoint 
is not connected 00:25:02.291 [2024-04-15 16:17:31.999675] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179c370 (9): Bad file descriptor 00:25:02.291 [2024-04-15 16:17:32.000673] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:02.291 [2024-04-15 16:17:32.000696] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:02.291 [2024-04-15 16:17:32.000707] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:02.291 request: 00:25:02.291 { 00:25:02.291 "name": "nvme0", 00:25:02.291 "trtype": "tcp", 00:25:02.291 "traddr": "127.0.0.1", 00:25:02.291 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:02.291 "adrfam": "ipv4", 00:25:02.291 "trsvcid": "4420", 00:25:02.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:02.291 "psk": "key1", 00:25:02.291 "method": "bdev_nvme_attach_controller", 00:25:02.291 "req_id": 1 00:25:02.291 } 00:25:02.291 Got JSON-RPC error response 00:25:02.291 response: 00:25:02.291 { 00:25:02.291 "code": -32602, 00:25:02.291 "message": "Invalid parameters" 00:25:02.291 } 00:25:02.291 16:17:32 -- common/autotest_common.sh@641 -- # es=1 00:25:02.291 16:17:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:02.291 16:17:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:02.291 16:17:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:02.291 16:17:32 -- keyring/file.sh@68 -- # get_refcnt key0 00:25:02.291 16:17:32 -- keyring/common.sh@12 -- # get_key key0 00:25:02.291 16:17:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:02.291 16:17:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.291 16:17:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:02.291 16:17:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:02.549 16:17:32 -- keyring/file.sh@68 -- # (( 1 == 1 )) 00:25:02.549 16:17:32 -- keyring/file.sh@69 -- # get_refcnt key1 00:25:02.549 16:17:32 -- keyring/common.sh@12 -- # get_key key1 00:25:02.549 16:17:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:02.549 16:17:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.549 16:17:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:02.549 16:17:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:02.807 16:17:32 -- keyring/file.sh@69 -- # (( 1 == 1 )) 00:25:02.807 16:17:32 -- keyring/file.sh@72 -- # bperf_cmd keyring_file_remove_key key0 00:25:02.807 16:17:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:03.088 16:17:32 -- keyring/file.sh@73 -- # bperf_cmd keyring_file_remove_key key1 00:25:03.088 16:17:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:03.656 16:17:33 -- keyring/file.sh@74 -- # bperf_cmd keyring_get_keys 00:25:03.656 16:17:33 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.656 16:17:33 -- keyring/file.sh@74 -- # jq length 00:25:03.656 16:17:33 -- keyring/file.sh@74 -- # (( 0 == 0 )) 00:25:03.656 16:17:33 -- keyring/file.sh@77 -- # chmod 0660 /tmp/tmp.cC5f7D9GgN 00:25:03.656 16:17:33 -- keyring/file.sh@78 -- # NOT bperf_cmd keyring_file_add_key key0 
/tmp/tmp.cC5f7D9GgN 00:25:03.656 16:17:33 -- common/autotest_common.sh@638 -- # local es=0 00:25:03.656 16:17:33 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.cC5f7D9GgN 00:25:03.656 16:17:33 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:25:03.656 16:17:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:03.656 16:17:33 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:25:03.656 16:17:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:03.656 16:17:33 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cC5f7D9GgN 00:25:03.656 16:17:33 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cC5f7D9GgN 00:25:03.914 [2024-04-15 16:17:33.815847] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cC5f7D9GgN': 0100660 00:25:03.914 [2024-04-15 16:17:33.815894] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:03.914 request: 00:25:03.914 { 00:25:03.914 "name": "key0", 00:25:03.914 "path": "/tmp/tmp.cC5f7D9GgN", 00:25:03.914 "method": "keyring_file_add_key", 00:25:03.914 "req_id": 1 00:25:03.914 } 00:25:03.914 Got JSON-RPC error response 00:25:03.914 response: 00:25:03.914 { 00:25:03.914 "code": -1, 00:25:03.914 "message": "Operation not permitted" 00:25:03.914 } 00:25:03.914 16:17:33 -- common/autotest_common.sh@641 -- # es=1 00:25:03.914 16:17:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:03.914 16:17:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:03.914 16:17:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:03.914 16:17:33 -- keyring/file.sh@81 -- # chmod 0600 /tmp/tmp.cC5f7D9GgN 00:25:03.914 16:17:33 -- keyring/file.sh@82 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cC5f7D9GgN 00:25:03.914 16:17:33 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cC5f7D9GgN 00:25:04.481 16:17:34 -- keyring/file.sh@83 -- # rm -f /tmp/tmp.cC5f7D9GgN 00:25:04.481 16:17:34 -- keyring/file.sh@85 -- # get_refcnt key0 00:25:04.481 16:17:34 -- keyring/common.sh@12 -- # get_key key0 00:25:04.481 16:17:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:04.481 16:17:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:04.481 16:17:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:04.481 16:17:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:04.481 16:17:34 -- keyring/file.sh@85 -- # (( 1 == 1 )) 00:25:04.481 16:17:34 -- keyring/file.sh@87 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:04.481 16:17:34 -- common/autotest_common.sh@638 -- # local es=0 00:25:04.481 16:17:34 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:04.481 16:17:34 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:25:04.481 16:17:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:04.481 16:17:34 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:25:04.481 16:17:34 -- common/autotest_common.sh@630 
-- # case "$(type -t "$arg")" in 00:25:04.481 16:17:34 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:04.481 16:17:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:04.835 [2024-04-15 16:17:34.684285] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.cC5f7D9GgN': No such file or directory 00:25:04.835 [2024-04-15 16:17:34.684345] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:04.835 [2024-04-15 16:17:34.684388] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:04.835 [2024-04-15 16:17:34.684398] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:04.835 [2024-04-15 16:17:34.684408] bdev_nvme.c:6183:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:04.835 request: 00:25:04.835 { 00:25:04.835 "name": "nvme0", 00:25:04.835 "trtype": "tcp", 00:25:04.835 "traddr": "127.0.0.1", 00:25:04.835 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:04.835 "adrfam": "ipv4", 00:25:04.835 "trsvcid": "4420", 00:25:04.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:04.835 "psk": "key0", 00:25:04.835 "method": "bdev_nvme_attach_controller", 00:25:04.835 "req_id": 1 00:25:04.835 } 00:25:04.835 Got JSON-RPC error response 00:25:04.835 response: 00:25:04.835 { 00:25:04.835 "code": -19, 00:25:04.835 "message": "No such device" 00:25:04.835 } 00:25:04.835 16:17:34 -- common/autotest_common.sh@641 -- # es=1 00:25:04.835 16:17:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:04.835 16:17:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:04.835 16:17:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:04.835 16:17:34 -- keyring/file.sh@89 -- # bperf_cmd keyring_file_remove_key key0 00:25:04.835 16:17:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:05.106 16:17:35 -- keyring/file.sh@92 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:05.106 16:17:35 -- keyring/common.sh@15 -- # local name key digest path 00:25:05.106 16:17:35 -- keyring/common.sh@17 -- # name=key0 00:25:05.106 16:17:35 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:05.106 16:17:35 -- keyring/common.sh@17 -- # digest=0 00:25:05.106 16:17:35 -- keyring/common.sh@18 -- # mktemp 00:25:05.106 16:17:35 -- keyring/common.sh@18 -- # path=/tmp/tmp.xZ80omkYRT 00:25:05.106 16:17:35 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:05.106 16:17:35 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:05.106 16:17:35 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:05.106 16:17:35 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:25:05.106 16:17:35 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:25:05.106 16:17:35 -- nvmf/common.sh@693 -- # digest=0 00:25:05.106 16:17:35 -- nvmf/common.sh@694 -- # python - 00:25:05.106 16:17:35 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xZ80omkYRT 00:25:05.106 16:17:35 -- keyring/common.sh@23 -- # echo 
/tmp/tmp.xZ80omkYRT 00:25:05.106 16:17:35 -- keyring/file.sh@92 -- # key0path=/tmp/tmp.xZ80omkYRT 00:25:05.106 16:17:35 -- keyring/file.sh@93 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xZ80omkYRT 00:25:05.106 16:17:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xZ80omkYRT 00:25:05.672 16:17:35 -- keyring/file.sh@94 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.672 16:17:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.930 nvme0n1 00:25:05.930 16:17:35 -- keyring/file.sh@96 -- # get_refcnt key0 00:25:05.930 16:17:35 -- keyring/common.sh@12 -- # get_key key0 00:25:05.930 16:17:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:05.930 16:17:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:05.930 16:17:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:05.930 16:17:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.188 16:17:36 -- keyring/file.sh@96 -- # (( 2 == 2 )) 00:25:06.188 16:17:36 -- keyring/file.sh@97 -- # bperf_cmd keyring_file_remove_key key0 00:25:06.188 16:17:36 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:06.446 16:17:36 -- keyring/file.sh@98 -- # jq -r .removed 00:25:06.446 16:17:36 -- keyring/file.sh@98 -- # get_key key0 00:25:06.446 16:17:36 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.446 16:17:36 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:06.446 16:17:36 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.012 16:17:36 -- keyring/file.sh@98 -- # [[ true == \t\r\u\e ]] 00:25:07.012 16:17:36 -- keyring/file.sh@99 -- # get_refcnt key0 00:25:07.012 16:17:36 -- keyring/common.sh@12 -- # get_key key0 00:25:07.012 16:17:36 -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:07.012 16:17:36 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:07.012 16:17:36 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:07.012 16:17:36 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.269 16:17:37 -- keyring/file.sh@99 -- # (( 1 == 1 )) 00:25:07.269 16:17:37 -- keyring/file.sh@100 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:07.269 16:17:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:07.527 16:17:37 -- keyring/file.sh@101 -- # bperf_cmd keyring_get_keys 00:25:07.527 16:17:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.527 16:17:37 -- keyring/file.sh@101 -- # jq length 00:25:07.785 16:17:37 -- keyring/file.sh@101 -- # (( 0 == 0 )) 00:25:07.785 16:17:37 -- keyring/file.sh@104 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xZ80omkYRT 00:25:07.785 16:17:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xZ80omkYRT 00:25:08.042 
16:17:37 -- keyring/file.sh@105 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QNHRBP9o0c 00:25:08.042 16:17:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QNHRBP9o0c 00:25:08.300 16:17:38 -- keyring/file.sh@106 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:08.300 16:17:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:08.558 nvme0n1 00:25:08.558 16:17:38 -- keyring/file.sh@109 -- # bperf_cmd save_config 00:25:08.558 16:17:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:09.127 16:17:38 -- keyring/file.sh@109 -- # config='{ 00:25:09.127 "subsystems": [ 00:25:09.127 { 00:25:09.127 "subsystem": "keyring", 00:25:09.127 "config": [ 00:25:09.127 { 00:25:09.127 "method": "keyring_file_add_key", 00:25:09.127 "params": { 00:25:09.127 "name": "key0", 00:25:09.127 "path": "/tmp/tmp.xZ80omkYRT" 00:25:09.127 } 00:25:09.127 }, 00:25:09.127 { 00:25:09.127 "method": "keyring_file_add_key", 00:25:09.127 "params": { 00:25:09.127 "name": "key1", 00:25:09.127 "path": "/tmp/tmp.QNHRBP9o0c" 00:25:09.127 } 00:25:09.127 } 00:25:09.127 ] 00:25:09.127 }, 00:25:09.127 { 00:25:09.127 "subsystem": "iobuf", 00:25:09.127 "config": [ 00:25:09.127 { 00:25:09.127 "method": "iobuf_set_options", 00:25:09.127 "params": { 00:25:09.127 "small_pool_count": 8192, 00:25:09.127 "large_pool_count": 1024, 00:25:09.127 "small_bufsize": 8192, 00:25:09.127 "large_bufsize": 135168 00:25:09.127 } 00:25:09.127 } 00:25:09.127 ] 00:25:09.127 }, 00:25:09.127 { 00:25:09.127 "subsystem": "sock", 00:25:09.127 "config": [ 00:25:09.127 { 00:25:09.127 "method": "sock_impl_set_options", 00:25:09.127 "params": { 00:25:09.128 "impl_name": "uring", 00:25:09.128 "recv_buf_size": 2097152, 00:25:09.128 "send_buf_size": 2097152, 00:25:09.128 "enable_recv_pipe": true, 00:25:09.128 "enable_quickack": false, 00:25:09.128 "enable_placement_id": 0, 00:25:09.128 "enable_zerocopy_send_server": false, 00:25:09.128 "enable_zerocopy_send_client": false, 00:25:09.128 "zerocopy_threshold": 0, 00:25:09.128 "tls_version": 0, 00:25:09.128 "enable_ktls": false 00:25:09.128 } 00:25:09.128 }, 00:25:09.128 { 00:25:09.128 "method": "sock_impl_set_options", 00:25:09.128 "params": { 00:25:09.128 "impl_name": "posix", 00:25:09.128 "recv_buf_size": 2097152, 00:25:09.128 "send_buf_size": 2097152, 00:25:09.128 "enable_recv_pipe": true, 00:25:09.128 "enable_quickack": false, 00:25:09.128 "enable_placement_id": 0, 00:25:09.128 "enable_zerocopy_send_server": true, 00:25:09.128 "enable_zerocopy_send_client": false, 00:25:09.128 "zerocopy_threshold": 0, 00:25:09.128 "tls_version": 0, 00:25:09.128 "enable_ktls": false 00:25:09.128 } 00:25:09.128 }, 00:25:09.128 { 00:25:09.128 "method": "sock_impl_set_options", 00:25:09.128 "params": { 00:25:09.128 "impl_name": "ssl", 00:25:09.128 "recv_buf_size": 4096, 00:25:09.128 "send_buf_size": 4096, 00:25:09.128 "enable_recv_pipe": true, 00:25:09.128 "enable_quickack": false, 00:25:09.128 "enable_placement_id": 0, 00:25:09.128 "enable_zerocopy_send_server": true, 00:25:09.128 "enable_zerocopy_send_client": false, 00:25:09.128 "zerocopy_threshold": 0, 00:25:09.128 "tls_version": 0, 
00:25:09.128 "enable_ktls": false 00:25:09.128 } 00:25:09.128 } 00:25:09.128 ] 00:25:09.128 }, 00:25:09.128 { 00:25:09.128 "subsystem": "vmd", 00:25:09.128 "config": [] 00:25:09.128 }, 00:25:09.128 { 00:25:09.128 "subsystem": "accel", 00:25:09.128 "config": [ 00:25:09.128 { 00:25:09.128 "method": "accel_set_options", 00:25:09.128 "params": { 00:25:09.128 "small_cache_size": 128, 00:25:09.128 "large_cache_size": 16, 00:25:09.128 "task_count": 2048, 00:25:09.128 "sequence_count": 2048, 00:25:09.128 "buf_count": 2048 00:25:09.128 } 00:25:09.128 } 00:25:09.128 ] 00:25:09.128 }, 00:25:09.128 { 00:25:09.128 "subsystem": "bdev", 00:25:09.128 "config": [ 00:25:09.128 { 00:25:09.128 "method": "bdev_set_options", 00:25:09.128 "params": { 00:25:09.128 "bdev_io_pool_size": 65535, 00:25:09.128 "bdev_io_cache_size": 256, 00:25:09.128 "bdev_auto_examine": true, 00:25:09.128 "iobuf_small_cache_size": 128, 00:25:09.128 "iobuf_large_cache_size": 16 00:25:09.128 } 00:25:09.128 }, 00:25:09.128 { 00:25:09.128 "method": "bdev_raid_set_options", 00:25:09.128 "params": { 00:25:09.128 "process_window_size_kb": 1024 00:25:09.128 } 00:25:09.128 }, 00:25:09.128 { 00:25:09.128 "method": "bdev_iscsi_set_options", 00:25:09.128 "params": { 00:25:09.128 "timeout_sec": 30 00:25:09.128 } 00:25:09.128 }, 00:25:09.128 { 00:25:09.128 "method": "bdev_nvme_set_options", 00:25:09.128 "params": { 00:25:09.128 "action_on_timeout": "none", 00:25:09.128 "timeout_us": 0, 00:25:09.128 "timeout_admin_us": 0, 00:25:09.128 "keep_alive_timeout_ms": 10000, 00:25:09.128 "arbitration_burst": 0, 00:25:09.128 "low_priority_weight": 0, 00:25:09.128 "medium_priority_weight": 0, 00:25:09.128 "high_priority_weight": 0, 00:25:09.128 "nvme_adminq_poll_period_us": 10000, 00:25:09.128 "nvme_ioq_poll_period_us": 0, 00:25:09.128 "io_queue_requests": 512, 00:25:09.128 "delay_cmd_submit": true, 00:25:09.128 "transport_retry_count": 4, 00:25:09.128 "bdev_retry_count": 3, 00:25:09.128 "transport_ack_timeout": 0, 00:25:09.128 "ctrlr_loss_timeout_sec": 0, 00:25:09.128 "reconnect_delay_sec": 0, 00:25:09.128 "fast_io_fail_timeout_sec": 0, 00:25:09.128 "disable_auto_failback": false, 00:25:09.128 "generate_uuids": false, 00:25:09.128 "transport_tos": 0, 00:25:09.128 "nvme_error_stat": false, 00:25:09.128 "rdma_srq_size": 0, 00:25:09.128 "io_path_stat": false, 00:25:09.128 "allow_accel_sequence": false, 00:25:09.128 "rdma_max_cq_size": 0, 00:25:09.128 "rdma_cm_event_timeout_ms": 0, 00:25:09.128 "dhchap_digests": [ 00:25:09.128 "sha256", 00:25:09.128 "sha384", 00:25:09.128 "sha512" 00:25:09.128 ], 00:25:09.128 "dhchap_dhgroups": [ 00:25:09.128 "null", 00:25:09.128 "ffdhe2048", 00:25:09.128 "ffdhe3072", 00:25:09.128 "ffdhe4096", 00:25:09.128 "ffdhe6144", 00:25:09.128 "ffdhe8192" 00:25:09.128 ] 00:25:09.128 } 00:25:09.128 }, 00:25:09.128 { 00:25:09.128 "method": "bdev_nvme_attach_controller", 00:25:09.128 "params": { 00:25:09.128 "name": "nvme0", 00:25:09.128 "trtype": "TCP", 00:25:09.128 "adrfam": "IPv4", 00:25:09.128 "traddr": "127.0.0.1", 00:25:09.128 "trsvcid": "4420", 00:25:09.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:09.128 "prchk_reftag": false, 00:25:09.128 "prchk_guard": false, 00:25:09.128 "ctrlr_loss_timeout_sec": 0, 00:25:09.128 "reconnect_delay_sec": 0, 00:25:09.128 "fast_io_fail_timeout_sec": 0, 00:25:09.128 "psk": "key0", 00:25:09.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:09.128 "hdgst": false, 00:25:09.128 "ddgst": false 00:25:09.128 } 00:25:09.128 }, 00:25:09.128 { 00:25:09.128 "method": "bdev_nvme_set_hotplug", 00:25:09.128 "params": 
{ 00:25:09.128 "period_us": 100000, 00:25:09.128 "enable": false 00:25:09.128 } 00:25:09.128 }, 00:25:09.128 { 00:25:09.128 "method": "bdev_wait_for_examine" 00:25:09.128 } 00:25:09.128 ] 00:25:09.128 }, 00:25:09.128 { 00:25:09.128 "subsystem": "nbd", 00:25:09.128 "config": [] 00:25:09.128 } 00:25:09.128 ] 00:25:09.128 }' 00:25:09.128 16:17:38 -- keyring/file.sh@111 -- # killprocess 96330 00:25:09.128 16:17:38 -- common/autotest_common.sh@936 -- # '[' -z 96330 ']' 00:25:09.128 16:17:38 -- common/autotest_common.sh@940 -- # kill -0 96330 00:25:09.128 16:17:38 -- common/autotest_common.sh@941 -- # uname 00:25:09.128 16:17:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:09.128 16:17:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96330 00:25:09.128 killing process with pid 96330 00:25:09.128 Received shutdown signal, test time was about 1.000000 seconds 00:25:09.128 00:25:09.128 Latency(us) 00:25:09.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.128 =================================================================================================================== 00:25:09.128 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.128 16:17:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:09.128 16:17:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:09.128 16:17:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96330' 00:25:09.128 16:17:38 -- common/autotest_common.sh@955 -- # kill 96330 00:25:09.128 16:17:38 -- common/autotest_common.sh@960 -- # wait 96330 00:25:09.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:09.128 16:17:39 -- keyring/file.sh@114 -- # bperfpid=96595 00:25:09.128 16:17:39 -- keyring/file.sh@112 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:09.128 16:17:39 -- keyring/file.sh@116 -- # waitforlisten 96595 /var/tmp/bperf.sock 00:25:09.128 16:17:39 -- common/autotest_common.sh@817 -- # '[' -z 96595 ']' 00:25:09.128 16:17:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:09.128 16:17:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:09.128 16:17:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
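The large JSON captured at keyring/file.sh@109 is the first bdevperf's live configuration, pulled with the save_config RPC; the test then kills that instance (the killprocess above) and keyring/file.sh@112-114 start a fresh bdevperf, replaying the saved config through a file descriptor instead of re-issuing each RPC, which is where the '-c /dev/fd/63' and the echoed JSON below come from, presumably via process substitution. A condensed sketch, reusing $rpc and $sock from the earlier sketch:

# Capture the running configuration of the first bdevperf instance.
config=$("$rpc" -s "$sock" save_config)

# Restart bdevperf idle (-z) and hand the saved config back on an fd;
# <(echo ...) is what shows up as /dev/fd/63 in the xtrace.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
  -r /var/tmp/bperf.sock -z -c <(echo "$config") &
bperfpid=$!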
00:25:09.128 16:17:39 -- keyring/file.sh@112 -- # echo '{ 00:25:09.128 "subsystems": [ 00:25:09.128 { 00:25:09.128 "subsystem": "keyring", 00:25:09.128 "config": [ 00:25:09.128 { 00:25:09.128 "method": "keyring_file_add_key", 00:25:09.128 "params": { 00:25:09.128 "name": "key0", 00:25:09.128 "path": "/tmp/tmp.xZ80omkYRT" 00:25:09.128 } 00:25:09.128 }, 00:25:09.128 { 00:25:09.128 "method": "keyring_file_add_key", 00:25:09.128 "params": { 00:25:09.128 "name": "key1", 00:25:09.128 "path": "/tmp/tmp.QNHRBP9o0c" 00:25:09.128 } 00:25:09.128 } 00:25:09.128 ] 00:25:09.128 }, 00:25:09.128 { 00:25:09.128 "subsystem": "iobuf", 00:25:09.128 "config": [ 00:25:09.128 { 00:25:09.129 "method": "iobuf_set_options", 00:25:09.129 "params": { 00:25:09.129 "small_pool_count": 8192, 00:25:09.129 "large_pool_count": 1024, 00:25:09.129 "small_bufsize": 8192, 00:25:09.129 "large_bufsize": 135168 00:25:09.129 } 00:25:09.129 } 00:25:09.129 ] 00:25:09.129 }, 00:25:09.129 { 00:25:09.129 "subsystem": "sock", 00:25:09.129 "config": [ 00:25:09.129 { 00:25:09.129 "method": "sock_impl_set_options", 00:25:09.129 "params": { 00:25:09.129 "impl_name": "uring", 00:25:09.129 "recv_buf_size": 2097152, 00:25:09.129 "send_buf_size": 2097152, 00:25:09.129 "enable_recv_pipe": true, 00:25:09.129 "enable_quickack": false, 00:25:09.129 "enable_placement_id": 0, 00:25:09.129 "enable_zerocopy_send_server": false, 00:25:09.129 "enable_zerocopy_send_client": false, 00:25:09.129 "zerocopy_threshold": 0, 00:25:09.129 "tls_version": 0, 00:25:09.129 "enable_ktls": false 00:25:09.129 } 00:25:09.129 }, 00:25:09.129 { 00:25:09.129 "method": "sock_impl_set_options", 00:25:09.129 "params": { 00:25:09.129 "impl_name": "posix", 00:25:09.129 "recv_buf_size": 2097152, 00:25:09.129 "send_buf_size": 2097152, 00:25:09.129 "enable_recv_pipe": true, 00:25:09.129 "enable_quickack": false, 00:25:09.129 "enable_placement_id": 0, 00:25:09.129 "enable_zerocopy_send_server": true, 00:25:09.129 "enable_zerocopy_send_client": false, 00:25:09.129 "zerocopy_threshold": 0, 00:25:09.129 "tls_version": 0, 00:25:09.129 "enable_ktls": false 00:25:09.129 } 00:25:09.129 }, 00:25:09.129 { 00:25:09.129 "method": "sock_impl_set_options", 00:25:09.129 "params": { 00:25:09.129 "impl_name": "ssl", 00:25:09.129 "recv_buf_size": 4096, 00:25:09.129 "send_buf_size": 4096, 00:25:09.129 "enable_recv_pipe": true, 00:25:09.129 "enable_quickack": false, 00:25:09.129 "enable_placement_id": 0, 00:25:09.129 "enable_zerocopy_send_server": true, 00:25:09.129 "enable_zerocopy_send_client": false, 00:25:09.129 "zerocopy_threshold": 0, 00:25:09.129 "tls_version": 0, 00:25:09.129 "enable_ktls": false 00:25:09.129 } 00:25:09.129 } 00:25:09.129 ] 00:25:09.129 }, 00:25:09.129 { 00:25:09.129 "subsystem": "vmd", 00:25:09.129 "config": [] 00:25:09.129 }, 00:25:09.129 { 00:25:09.129 "subsystem": "accel", 00:25:09.129 "config": [ 00:25:09.129 { 00:25:09.129 "method": "accel_set_options", 00:25:09.129 "params": { 00:25:09.129 "small_cache_size": 128, 00:25:09.129 "large_cache_size": 16, 00:25:09.129 "task_count": 2048, 00:25:09.129 "sequence_count": 2048, 00:25:09.129 "buf_count": 2048 00:25:09.129 } 00:25:09.129 } 00:25:09.129 ] 00:25:09.129 }, 00:25:09.129 { 00:25:09.129 "subsystem": "bdev", 00:25:09.129 "config": [ 00:25:09.129 { 00:25:09.129 "method": "bdev_set_options", 00:25:09.129 "params": { 00:25:09.129 "bdev_io_pool_size": 65535, 00:25:09.129 "bdev_io_cache_size": 256, 00:25:09.129 "bdev_auto_examine": true, 00:25:09.129 "iobuf_small_cache_size": 128, 00:25:09.129 "iobuf_large_cache_size": 16 
00:25:09.129 } 00:25:09.129 }, 00:25:09.129 { 00:25:09.129 "method": "bdev_raid_set_options", 00:25:09.129 "params": { 00:25:09.129 "process_window_size_kb": 1024 00:25:09.129 } 00:25:09.129 }, 00:25:09.129 { 00:25:09.129 "method": "bdev_iscsi_set_options", 00:25:09.129 "params": { 00:25:09.129 "timeout_sec": 30 00:25:09.129 } 00:25:09.129 }, 00:25:09.129 { 00:25:09.129 "method": "bdev_nvme_set_options", 00:25:09.129 "params": { 00:25:09.129 "action_on_timeout": "none", 00:25:09.129 "timeout_us": 0, 00:25:09.129 "timeout_admin_us": 0, 00:25:09.129 "keep_alive_timeout_ms": 10000, 00:25:09.129 "arbitration_burst": 0, 00:25:09.129 "low_priority_weight": 0, 00:25:09.129 "medium_priority_weight": 0, 00:25:09.129 "high_priority_weight": 0, 00:25:09.129 "nvme_adminq_poll_period_us": 10000, 00:25:09.129 "nvme_ioq_poll_period_us": 0, 00:25:09.129 "io_queue_requests": 512, 00:25:09.129 "delay_cmd_submit": true, 00:25:09.129 "transport_retry_count": 4, 00:25:09.129 "bdev_retry_count": 3, 00:25:09.129 "transport_ack_timeout": 0, 00:25:09.129 "ctrlr_loss_timeout_sec": 0, 00:25:09.129 "reconnect_delay_sec": 0, 00:25:09.129 "fast_io_fail_timeout_sec": 0, 00:25:09.129 "disable_auto_failback": false, 00:25:09.129 "generate_uuids": false, 00:25:09.129 "transport_tos": 0, 00:25:09.129 "nvme_error_stat": false, 00:25:09.129 "rdma_srq_size": 0, 00:25:09.129 "io_path_stat": false, 00:25:09.129 "allow_accel_sequence": false, 00:25:09.129 "rdma_max_cq_size": 0, 00:25:09.129 "rdma_cm_event_timeout_ms": 0, 00:25:09.129 "dhchap_digests": [ 00:25:09.129 "sha256", 00:25:09.129 "sha384", 00:25:09.129 "sha512" 00:25:09.129 ], 00:25:09.129 "dhchap_dhgroups": [ 00:25:09.129 "null", 00:25:09.129 "ffdhe2048", 00:25:09.129 "ffdhe3072", 00:25:09.129 "ffdhe4096", 00:25:09.129 "ffdhe6144", 00:25:09.129 "ffdhe8192" 00:25:09.129 ] 00:25:09.129 } 00:25:09.129 }, 00:25:09.129 { 00:25:09.129 "method": "bdev_nvme_attach_controller", 00:25:09.129 "params": { 00:25:09.129 "name": "nvme0", 00:25:09.129 "trtype": "TCP", 00:25:09.129 "adrfam": "IPv4", 00:25:09.129 "traddr": "127.0.0.1", 00:25:09.129 "trsvcid": "4420", 00:25:09.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:09.129 "prchk_reftag": false, 00:25:09.129 "prchk_guard": false, 00:25:09.129 "ctrlr_loss_timeout_sec": 0, 00:25:09.129 "reconnect_delay_sec": 0, 00:25:09.129 "fast_io_fail_timeout_sec": 0, 00:25:09.129 "psk": "key0", 00:25:09.129 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:09.129 "hdgst": false, 00:25:09.129 "ddgst": false 00:25:09.129 } 00:25:09.129 }, 00:25:09.129 { 00:25:09.129 "method": "bdev_nvme_set_hotplug", 00:25:09.129 "params": { 00:25:09.129 "period_us": 100000, 00:25:09.129 "enable": false 00:25:09.129 } 00:25:09.129 }, 00:25:09.129 { 00:25:09.129 "method": "bdev_wait_for_examine" 00:25:09.129 } 00:25:09.129 ] 00:25:09.129 }, 00:25:09.129 { 00:25:09.129 "subsystem": "nbd", 00:25:09.129 "config": [] 00:25:09.129 } 00:25:09.129 ] 00:25:09.129 }' 00:25:09.129 16:17:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:09.129 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:25:09.388 [2024-04-15 16:17:39.124311] Starting SPDK v24.05-pre git sha1 26d44a121 / DPDK 22.11.4 initialization... 
00:25:09.388 [2024-04-15 16:17:39.124657] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96595 ] 00:25:09.388 [2024-04-15 16:17:39.263538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.388 [2024-04-15 16:17:39.316000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.646 [2024-04-15 16:17:39.476638] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:10.581 16:17:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:10.581 16:17:40 -- common/autotest_common.sh@850 -- # return 0 00:25:10.581 16:17:40 -- keyring/file.sh@117 -- # bperf_cmd keyring_get_keys 00:25:10.581 16:17:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:10.581 16:17:40 -- keyring/file.sh@117 -- # jq length 00:25:10.581 16:17:40 -- keyring/file.sh@117 -- # (( 2 == 2 )) 00:25:10.581 16:17:40 -- keyring/file.sh@118 -- # get_refcnt key0 00:25:10.581 16:17:40 -- keyring/common.sh@12 -- # get_key key0 00:25:10.581 16:17:40 -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:10.581 16:17:40 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:10.581 16:17:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:10.581 16:17:40 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:10.839 16:17:40 -- keyring/file.sh@118 -- # (( 2 == 2 )) 00:25:10.839 16:17:40 -- keyring/file.sh@119 -- # get_refcnt key1 00:25:10.839 16:17:40 -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:10.839 16:17:40 -- keyring/common.sh@12 -- # get_key key1 00:25:10.839 16:17:40 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:10.839 16:17:40 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:10.839 16:17:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:11.404 16:17:41 -- keyring/file.sh@119 -- # (( 1 == 1 )) 00:25:11.404 16:17:41 -- keyring/file.sh@120 -- # bperf_cmd bdev_nvme_get_controllers 00:25:11.404 16:17:41 -- keyring/file.sh@120 -- # jq -r '.[].name' 00:25:11.404 16:17:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:11.663 16:17:41 -- keyring/file.sh@120 -- # [[ nvme0 == nvme0 ]] 00:25:11.663 16:17:41 -- keyring/file.sh@1 -- # cleanup 00:25:11.663 16:17:41 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.xZ80omkYRT /tmp/tmp.QNHRBP9o0c 00:25:11.663 16:17:41 -- keyring/file.sh@20 -- # killprocess 96595 00:25:11.663 16:17:41 -- common/autotest_common.sh@936 -- # '[' -z 96595 ']' 00:25:11.663 16:17:41 -- common/autotest_common.sh@940 -- # kill -0 96595 00:25:11.663 16:17:41 -- common/autotest_common.sh@941 -- # uname 00:25:11.663 16:17:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:11.663 16:17:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96595 00:25:11.663 16:17:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:11.663 16:17:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:11.663 16:17:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96595' 00:25:11.663 killing process with pid 96595 00:25:11.663 Received shutdown signal, 
test time was about 1.000000 seconds 00:25:11.663 00:25:11.663 Latency(us) 00:25:11.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.663 =================================================================================================================== 00:25:11.663 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:11.663 16:17:41 -- common/autotest_common.sh@955 -- # kill 96595 00:25:11.663 16:17:41 -- common/autotest_common.sh@960 -- # wait 96595 00:25:11.921 16:17:41 -- keyring/file.sh@21 -- # killprocess 96311 00:25:11.921 16:17:41 -- common/autotest_common.sh@936 -- # '[' -z 96311 ']' 00:25:11.921 16:17:41 -- common/autotest_common.sh@940 -- # kill -0 96311 00:25:11.921 16:17:41 -- common/autotest_common.sh@941 -- # uname 00:25:11.922 16:17:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:11.922 16:17:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96311 00:25:11.922 16:17:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:11.922 killing process with pid 96311 00:25:11.922 16:17:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:11.922 16:17:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96311' 00:25:11.922 16:17:41 -- common/autotest_common.sh@955 -- # kill 96311 00:25:11.922 [2024-04-15 16:17:41.685447] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:11.922 16:17:41 -- common/autotest_common.sh@960 -- # wait 96311 00:25:12.180 00:25:12.180 real 0m16.992s 00:25:12.180 user 0m42.890s 00:25:12.180 sys 0m3.478s 00:25:12.180 16:17:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:12.180 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:25:12.180 ************************************ 00:25:12.180 END TEST keyring_file 00:25:12.180 ************************************ 00:25:12.180 16:17:42 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:25:12.180 16:17:42 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:25:12.180 16:17:42 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:25:12.180 16:17:42 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:25:12.180 16:17:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:12.180 16:17:42 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:25:12.180 16:17:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:12.180 16:17:42 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:25:12.180 16:17:42 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:25:12.180 16:17:42 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:25:12.180 16:17:42 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:25:12.180 16:17:42 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:25:12.180 16:17:42 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:25:12.180 16:17:42 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:25:12.180 16:17:42 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:25:12.180 16:17:42 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:25:12.180 16:17:42 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:25:12.180 16:17:42 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:25:12.180 16:17:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:12.180 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:25:12.180 16:17:42 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:25:12.180 16:17:42 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:25:12.180 16:17:42 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:25:12.180 
16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:25:14.081 INFO: APP EXITING 00:25:14.081 INFO: killing all VMs 00:25:14.081 INFO: killing vhost app 00:25:14.081 INFO: EXIT DONE 00:25:14.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:14.647 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:14.647 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:15.587 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:15.587 Cleaning 00:25:15.587 Removing: /var/run/dpdk/spdk0/config 00:25:15.587 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:15.587 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:15.587 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:15.587 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:15.587 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:15.587 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:15.587 Removing: /var/run/dpdk/spdk1/config 00:25:15.587 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:15.587 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:15.587 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:15.587 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:15.587 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:15.587 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:15.587 Removing: /var/run/dpdk/spdk2/config 00:25:15.587 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:15.587 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:15.587 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:15.587 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:15.587 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:15.587 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:15.587 Removing: /var/run/dpdk/spdk3/config 00:25:15.587 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:15.587 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:15.587 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:15.587 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:15.587 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:15.587 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:15.587 Removing: /var/run/dpdk/spdk4/config 00:25:15.587 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:15.587 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:15.587 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:15.587 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:15.587 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:15.587 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:15.587 Removing: /dev/shm/nvmf_trace.0 00:25:15.587 Removing: /dev/shm/spdk_tgt_trace.pid70914 00:25:15.587 Removing: /var/run/dpdk/spdk0 00:25:15.587 Removing: /var/run/dpdk/spdk1 00:25:15.587 Removing: /var/run/dpdk/spdk2 00:25:15.587 Removing: /var/run/dpdk/spdk3 00:25:15.587 Removing: /var/run/dpdk/spdk4 00:25:15.587 Removing: /var/run/dpdk/spdk_pid70751 00:25:15.587 Removing: /var/run/dpdk/spdk_pid70914 00:25:15.587 Removing: /var/run/dpdk/spdk_pid71138 00:25:15.587 Removing: /var/run/dpdk/spdk_pid71224 00:25:15.587 Removing: /var/run/dpdk/spdk_pid71244 00:25:15.587 Removing: /var/run/dpdk/spdk_pid71454 00:25:15.587 Removing: /var/run/dpdk/spdk_pid71650 00:25:15.587 Removing: /var/run/dpdk/spdk_pid71802 00:25:15.587 Removing: 
/var/run/dpdk/spdk_pid71877 00:25:15.587 Removing: /var/run/dpdk/spdk_pid71953 00:25:15.587 Removing: /var/run/dpdk/spdk_pid72041 00:25:15.587 Removing: /var/run/dpdk/spdk_pid72126 00:25:15.587 Removing: /var/run/dpdk/spdk_pid72172 00:25:15.587 Removing: /var/run/dpdk/spdk_pid72206 00:25:15.587 Removing: /var/run/dpdk/spdk_pid72272 00:25:15.587 Removing: /var/run/dpdk/spdk_pid72380 00:25:15.587 Removing: /var/run/dpdk/spdk_pid72818 00:25:15.587 Removing: /var/run/dpdk/spdk_pid72874 00:25:15.587 Removing: /var/run/dpdk/spdk_pid72924 00:25:15.587 Removing: /var/run/dpdk/spdk_pid72940 00:25:15.587 Removing: /var/run/dpdk/spdk_pid73011 00:25:15.587 Removing: /var/run/dpdk/spdk_pid73019 00:25:15.587 Removing: /var/run/dpdk/spdk_pid73091 00:25:15.587 Removing: /var/run/dpdk/spdk_pid73107 00:25:15.587 Removing: /var/run/dpdk/spdk_pid73158 00:25:15.587 Removing: /var/run/dpdk/spdk_pid73168 00:25:15.587 Removing: /var/run/dpdk/spdk_pid73218 00:25:15.587 Removing: /var/run/dpdk/spdk_pid73236 00:25:15.587 Removing: /var/run/dpdk/spdk_pid73367 00:25:15.587 Removing: /var/run/dpdk/spdk_pid73407 00:25:15.846 Removing: /var/run/dpdk/spdk_pid73486 00:25:15.846 Removing: /var/run/dpdk/spdk_pid73545 00:25:15.846 Removing: /var/run/dpdk/spdk_pid73568 00:25:15.846 Removing: /var/run/dpdk/spdk_pid73646 00:25:15.846 Removing: /var/run/dpdk/spdk_pid73683 00:25:15.846 Removing: /var/run/dpdk/spdk_pid73723 00:25:15.846 Removing: /var/run/dpdk/spdk_pid73756 00:25:15.846 Removing: /var/run/dpdk/spdk_pid73800 00:25:15.847 Removing: /var/run/dpdk/spdk_pid73833 00:25:15.847 Removing: /var/run/dpdk/spdk_pid73878 00:25:15.847 Removing: /var/run/dpdk/spdk_pid73911 00:25:15.847 Removing: /var/run/dpdk/spdk_pid73955 00:25:15.847 Removing: /var/run/dpdk/spdk_pid73988 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74034 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74067 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74111 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74144 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74187 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74222 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74260 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74302 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74344 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74388 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74422 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74499 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74601 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74921 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74938 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74984 00:25:15.847 Removing: /var/run/dpdk/spdk_pid74992 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75013 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75032 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75041 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75061 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75080 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75099 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75109 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75134 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75147 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75163 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75182 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75195 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75211 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75234 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75243 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75264 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75299 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75312 
00:25:15.847 Removing: /var/run/dpdk/spdk_pid75342 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75415 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75454 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75458 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75497 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75506 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75514 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75560 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75574 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75606 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75616 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75624 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75635 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75639 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75654 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75658 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75673 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75700 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75737 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75752 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75779 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75794 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75796 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75846 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75852 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75888 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75896 00:25:15.847 Removing: /var/run/dpdk/spdk_pid75903 00:25:16.105 Removing: /var/run/dpdk/spdk_pid75911 00:25:16.105 Removing: /var/run/dpdk/spdk_pid75918 00:25:16.105 Removing: /var/run/dpdk/spdk_pid75926 00:25:16.106 Removing: /var/run/dpdk/spdk_pid75933 00:25:16.106 Removing: /var/run/dpdk/spdk_pid75941 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76025 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76067 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76180 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76221 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76268 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76287 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76304 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76324 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76350 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76371 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76450 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76466 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76505 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76562 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76608 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76636 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76733 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76786 00:25:16.106 Removing: /var/run/dpdk/spdk_pid76823 00:25:16.106 Removing: /var/run/dpdk/spdk_pid77082 00:25:16.106 Removing: /var/run/dpdk/spdk_pid77191 00:25:16.106 Removing: /var/run/dpdk/spdk_pid77229 00:25:16.106 Removing: /var/run/dpdk/spdk_pid77557 00:25:16.106 Removing: /var/run/dpdk/spdk_pid77596 00:25:16.106 Removing: /var/run/dpdk/spdk_pid77896 00:25:16.106 Removing: /var/run/dpdk/spdk_pid78310 00:25:16.106 Removing: /var/run/dpdk/spdk_pid78581 00:25:16.106 Removing: /var/run/dpdk/spdk_pid79360 00:25:16.106 Removing: /var/run/dpdk/spdk_pid80186 00:25:16.106 Removing: /var/run/dpdk/spdk_pid80298 00:25:16.106 Removing: /var/run/dpdk/spdk_pid80366 00:25:16.106 Removing: /var/run/dpdk/spdk_pid81624 00:25:16.106 Removing: /var/run/dpdk/spdk_pid81848 00:25:16.106 Removing: /var/run/dpdk/spdk_pid82156 00:25:16.106 Removing: /var/run/dpdk/spdk_pid82270 00:25:16.106 Removing: /var/run/dpdk/spdk_pid82398 00:25:16.106 Removing: 
/var/run/dpdk/spdk_pid82418 00:25:16.106 Removing: /var/run/dpdk/spdk_pid82438 00:25:16.106 Removing: /var/run/dpdk/spdk_pid82466 00:25:16.106 Removing: /var/run/dpdk/spdk_pid82558 00:25:16.106 Removing: /var/run/dpdk/spdk_pid82692 00:25:16.106 Removing: /var/run/dpdk/spdk_pid82827 00:25:16.106 Removing: /var/run/dpdk/spdk_pid82902 00:25:16.106 Removing: /var/run/dpdk/spdk_pid83094 00:25:16.106 Removing: /var/run/dpdk/spdk_pid83158 00:25:16.106 Removing: /var/run/dpdk/spdk_pid83243 00:25:16.106 Removing: /var/run/dpdk/spdk_pid83551 00:25:16.106 Removing: /var/run/dpdk/spdk_pid83899 00:25:16.106 Removing: /var/run/dpdk/spdk_pid83906 00:25:16.106 Removing: /var/run/dpdk/spdk_pid86120 00:25:16.106 Removing: /var/run/dpdk/spdk_pid86122 00:25:16.106 Removing: /var/run/dpdk/spdk_pid86400 00:25:16.106 Removing: /var/run/dpdk/spdk_pid86424 00:25:16.106 Removing: /var/run/dpdk/spdk_pid86439 00:25:16.106 Removing: /var/run/dpdk/spdk_pid86480 00:25:16.106 Removing: /var/run/dpdk/spdk_pid86486 00:25:16.106 Removing: /var/run/dpdk/spdk_pid86575 00:25:16.106 Removing: /var/run/dpdk/spdk_pid86588 00:25:16.106 Removing: /var/run/dpdk/spdk_pid86696 00:25:16.106 Removing: /var/run/dpdk/spdk_pid86698 00:25:16.106 Removing: /var/run/dpdk/spdk_pid86812 00:25:16.106 Removing: /var/run/dpdk/spdk_pid86819 00:25:16.106 Removing: /var/run/dpdk/spdk_pid87193 00:25:16.106 Removing: /var/run/dpdk/spdk_pid87241 00:25:16.106 Removing: /var/run/dpdk/spdk_pid87325 00:25:16.106 Removing: /var/run/dpdk/spdk_pid87374 00:25:16.366 Removing: /var/run/dpdk/spdk_pid87670 00:25:16.366 Removing: /var/run/dpdk/spdk_pid87867 00:25:16.366 Removing: /var/run/dpdk/spdk_pid88239 00:25:16.366 Removing: /var/run/dpdk/spdk_pid88715 00:25:16.366 Removing: /var/run/dpdk/spdk_pid89305 00:25:16.366 Removing: /var/run/dpdk/spdk_pid89311 00:25:16.366 Removing: /var/run/dpdk/spdk_pid91243 00:25:16.366 Removing: /var/run/dpdk/spdk_pid91308 00:25:16.366 Removing: /var/run/dpdk/spdk_pid91364 00:25:16.366 Removing: /var/run/dpdk/spdk_pid91416 00:25:16.366 Removing: /var/run/dpdk/spdk_pid91534 00:25:16.366 Removing: /var/run/dpdk/spdk_pid91581 00:25:16.366 Removing: /var/run/dpdk/spdk_pid91634 00:25:16.366 Removing: /var/run/dpdk/spdk_pid91689 00:25:16.366 Removing: /var/run/dpdk/spdk_pid92006 00:25:16.366 Removing: /var/run/dpdk/spdk_pid93182 00:25:16.366 Removing: /var/run/dpdk/spdk_pid93314 00:25:16.366 Removing: /var/run/dpdk/spdk_pid93558 00:25:16.366 Removing: /var/run/dpdk/spdk_pid94110 00:25:16.366 Removing: /var/run/dpdk/spdk_pid94273 00:25:16.366 Removing: /var/run/dpdk/spdk_pid94436 00:25:16.366 Removing: /var/run/dpdk/spdk_pid94533 00:25:16.366 Removing: /var/run/dpdk/spdk_pid94687 00:25:16.366 Removing: /var/run/dpdk/spdk_pid94800 00:25:16.366 Removing: /var/run/dpdk/spdk_pid95479 00:25:16.366 Removing: /var/run/dpdk/spdk_pid95514 00:25:16.366 Removing: /var/run/dpdk/spdk_pid95549 00:25:16.366 Removing: /var/run/dpdk/spdk_pid95803 00:25:16.366 Removing: /var/run/dpdk/spdk_pid95833 00:25:16.366 Removing: /var/run/dpdk/spdk_pid95869 00:25:16.366 Removing: /var/run/dpdk/spdk_pid96311 00:25:16.366 Removing: /var/run/dpdk/spdk_pid96330 00:25:16.366 Removing: /var/run/dpdk/spdk_pid96595 00:25:16.366 Clean 00:25:16.625 16:17:46 -- common/autotest_common.sh@1437 -- # return 0 00:25:16.625 16:17:46 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:25:16.625 16:17:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:16.625 16:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:16.625 16:17:46 -- spdk/autotest.sh@384 -- # 
timing_exit autotest 00:25:16.625 16:17:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:16.625 16:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:16.625 16:17:46 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:16.625 16:17:46 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:16.625 16:17:46 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:16.625 16:17:46 -- spdk/autotest.sh@389 -- # hash lcov 00:25:16.625 16:17:46 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:25:16.625 16:17:46 -- spdk/autotest.sh@391 -- # hostname 00:25:16.625 16:17:46 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1701806725-069-updated-1701632595 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:16.913 geninfo: WARNING: invalid characters removed from testname! 00:25:43.461 16:18:13 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:46.743 16:18:16 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:50.032 16:18:19 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:52.575 16:18:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:55.110 16:18:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:57.013 16:18:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:59.546 16:18:29 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:59.546 
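The autotest.sh tail above is the coverage post-processing: lcov captures the counters produced by this run, merges them with the pre-test baseline (cov_base.info), and then strips DPDK, system, and example/app paths so that only SPDK's own sources remain in cov_total.info. Below is a condensed sketch of that same sequence using the paths from this run; the full --rc flag set from the log is abbreviated to the branch/function switches, which is an assumption about which options matter here.

#!/usr/bin/env bash
set -euo pipefail

out=/home/vagrant/spdk_repo/spdk/../output
repo=/home/vagrant/spdk_repo/spdk
lcovopts=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)

# capture the counters left behind by the test run, tagged with the hostname
lcov "${lcovopts[@]}" -c -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"

# merge with the baseline captured before the tests started
lcov "${lcovopts[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# drop third-party and uninteresting paths, using exactly the patterns logged above
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${lcovopts[@]}" -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done

# discard the intermediates, keeping only cov_total.info
rm -f "$out/cov_base.info" "$out/cov_test.info"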
16:18:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:59.546 16:18:29 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:59.546 16:18:29 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.546 16:18:29 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.546 16:18:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.546 16:18:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.546 16:18:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.546 16:18:29 -- paths/export.sh@5 -- $ export PATH 00:25:59.546 16:18:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.546 16:18:29 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:59.546 16:18:29 -- common/autobuild_common.sh@435 -- $ date +%s 00:25:59.546 16:18:29 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713197909.XXXXXX 00:25:59.546 16:18:29 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713197909.hDbjh8 00:25:59.546 16:18:29 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:25:59.546 16:18:29 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:25:59.546 16:18:29 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:25:59.546 16:18:29 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:25:59.546 16:18:29 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:59.546 16:18:29 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:59.546 16:18:29 -- common/autobuild_common.sh@451 -- $ get_config_params 00:25:59.546 16:18:29 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:25:59.546 16:18:29 -- common/autotest_common.sh@10 -- $ set +x 00:25:59.547 16:18:29 -- common/autobuild_common.sh@451 -- $ 
config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:25:59.547 16:18:29 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:25:59.547 16:18:29 -- pm/common@17 -- $ local monitor 00:25:59.547 16:18:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:59.547 16:18:29 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=98181 00:25:59.547 16:18:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:59.547 16:18:29 -- pm/common@21 -- $ date +%s 00:25:59.547 16:18:29 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=98183 00:25:59.547 16:18:29 -- pm/common@26 -- $ sleep 1 00:25:59.547 16:18:29 -- pm/common@21 -- $ date +%s 00:25:59.547 16:18:29 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713197909 00:25:59.547 16:18:29 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713197909 00:25:59.547 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713197909_collect-vmstat.pm.log 00:25:59.547 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713197909_collect-cpu-load.pm.log 00:26:00.481 16:18:30 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:26:00.481 16:18:30 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:26:00.481 16:18:30 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:26:00.481 16:18:30 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:26:00.481 16:18:30 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:26:00.481 16:18:30 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:26:00.481 16:18:30 -- spdk/autopackage.sh@19 -- $ timing_finish 00:26:00.481 16:18:30 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:00.481 16:18:30 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:26:00.481 16:18:30 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:00.481 16:18:30 -- spdk/autopackage.sh@20 -- $ exit 0 00:26:00.481 16:18:30 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:26:00.481 16:18:30 -- pm/common@30 -- $ signal_monitor_resources TERM 00:26:00.481 16:18:30 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:26:00.481 16:18:30 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:00.481 16:18:30 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:26:00.481 16:18:30 -- pm/common@45 -- $ pid=98190 00:26:00.481 16:18:30 -- pm/common@52 -- $ sudo kill -TERM 98190 00:26:00.481 16:18:30 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:00.481 16:18:30 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:26:00.481 16:18:30 -- pm/common@45 -- $ pid=98189 00:26:00.481 16:18:30 -- pm/common@52 -- $ sudo kill -TERM 98189 00:26:00.740 + [[ -n 5775 ]] 00:26:00.740 + sudo kill 5775 00:26:00.751 [Pipeline] } 00:26:00.770 [Pipeline] // timeout 00:26:00.776 [Pipeline] } 
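Before autopackage exits, stop_monitor_resources (pm/common above) signals the two resource monitors started at the beginning of the packaging step: collect-cpu-load and collect-vmstat, each logging under the output power directory with a monitor.autopackage.sh.<epoch> prefix and a per-monitor .pid file. Below is a rough sketch of that start/stop pattern; it assumes the monitors write <name>.pid themselves and must be backgrounded by the caller, neither of which is visible in this log.

#!/usr/bin/env bash
set -euo pipefail

out=/home/vagrant/spdk_repo/spdk/../output
pm=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
tag="monitor.autopackage.sh.$(date +%s)"

start_monitors() {
    for mon in collect-cpu-load collect-vmstat; do
        # -d: sample directory, -l: log to file, -p: filename prefix (flags as seen in the log above)
        sudo -E "$pm/$mon" -d "$out/power" -l -p "$tag" &
    done
}

stop_monitors() {
    for mon in collect-cpu-load collect-vmstat; do
        pidfile="$out/power/$mon.pid"
        # mirror pm/common: only signal a monitor whose pid file exists
        if [[ -e $pidfile ]]; then
            sudo kill -TERM "$(cat "$pidfile")"
        fi
    done
}

trap stop_monitors EXIT
start_monitors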
00:26:00.794 [Pipeline] // stage 00:26:00.800 [Pipeline] } 00:26:00.818 [Pipeline] // catchError 00:26:00.827 [Pipeline] stage 00:26:00.829 [Pipeline] { (Stop VM) 00:26:00.844 [Pipeline] sh 00:26:01.124 + vagrant halt 00:26:05.310 ==> default: Halting domain... 00:26:11.961 [Pipeline] sh 00:26:12.242 + vagrant destroy -f 00:26:16.498 ==> default: Removing domain... 00:26:16.509 [Pipeline] sh 00:26:16.793 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:26:16.802 [Pipeline] } 00:26:16.820 [Pipeline] // stage 00:26:16.826 [Pipeline] } 00:26:16.844 [Pipeline] // dir 00:26:16.849 [Pipeline] } 00:26:16.865 [Pipeline] // wrap 00:26:16.871 [Pipeline] } 00:26:16.883 [Pipeline] // catchError 00:26:16.892 [Pipeline] stage 00:26:16.895 [Pipeline] { (Epilogue) 00:26:16.909 [Pipeline] sh 00:26:17.184 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:23.764 [Pipeline] catchError 00:26:23.766 [Pipeline] { 00:26:23.781 [Pipeline] sh 00:26:24.060 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:24.318 Artifacts sizes are good 00:26:24.326 [Pipeline] } 00:26:24.343 [Pipeline] // catchError 00:26:24.353 [Pipeline] archiveArtifacts 00:26:24.360 Archiving artifacts 00:26:24.549 [Pipeline] cleanWs 00:26:24.560 [WS-CLEANUP] Deleting project workspace... 00:26:24.560 [WS-CLEANUP] Deferred wipeout is used... 00:26:24.566 [WS-CLEANUP] done 00:26:24.568 [Pipeline] } 00:26:24.585 [Pipeline] // stage 00:26:24.590 [Pipeline] } 00:26:24.604 [Pipeline] // node 00:26:24.611 [Pipeline] End of Pipeline 00:26:24.646 Finished: SUCCESS